Trading off complexity and explainability with GAMs
Understanding the inner workings of ML algorithms is key to informed decision-making and responsible AI deployment. Explainability turns opaque algorithms into interpretable insights, enabling us to trust, improve, and navigate the decisions guided by AI systems. Unlike black-box ML algorithms, Generalised Additive Models (GAMs) offer a structured framework that captures sophisticated relationships between variables in a transparent manner, striking a balance between complexity and explainability.
1. What is a GAM

A GAM (Generalised Additive Model) extends the generalised linear model by replacing each linear term with a smooth function of that feature, while keeping the overall structure additive:

g(E[y]) = β0 + f1(x1) + f2(x2) + … + fp(xp)

where g is a link function and each fj is a smooth function of a single feature (typically a spline) learned from the data. Because the fj enter as a sum, each feature's contribution can be inspected on its own.
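The additive structure can be sketched with plain numpy: fit one spline basis per feature and solve a single least-squares problem. This is a minimal illustration, not a production GAM fitter (the data, the truncated-power basis, and the knot placement below are all assumptions for the demo; real implementations add penalised smoothing).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with an additive, nonlinear ground truth (assumption for the demo).
n = 500
x1 = rng.uniform(-3, 3, n)
x2 = rng.uniform(-3, 3, n)
y = np.sin(x1) + 0.5 * x2**2 + rng.normal(0, 0.2, n)

def spline_basis(x, n_knots=8, degree=3):
    """Truncated power basis: polynomial terms plus one hinge term per knot."""
    knots = np.quantile(x, np.linspace(0.1, 0.9, n_knots))
    cols = [x**d for d in range(1, degree + 1)]
    cols += [np.clip(x - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Design matrix: intercept + one smooth per feature -- the additive structure.
B1, B2 = spline_basis(x1), spline_basis(x2)
X = np.column_stack([np.ones(n), B1, B2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ coef
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```

In practice you would use a dedicated library (e.g. `pygam` in Python or `mgcv` in R), which also handles smoothness penalties and link functions.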
2. Pros
- Interpretability: additivity makes it possible to interpret the marginal impact of a single variable independently of the values of the other variables
- Flexibility: GAMs can capture common nonlinear patterns that a classic linear model would miss
- Statistical nature: you get confidence intervals for the weights, significance tests, prediction intervals, and more
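The interpretability point above can be demonstrated directly: in an additive fit, the prediction decomposes exactly into one contribution per feature, and each contribution can be examined in isolation. A self-contained sketch (same toy setup and truncated-power basis as assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x1 = rng.uniform(-3, 3, n)
x2 = rng.uniform(-3, 3, n)
y = np.sin(x1) + 0.5 * x2**2 + rng.normal(0, 0.2, n)

def spline_basis(x, n_knots=8, degree=3):
    knots = np.quantile(x, np.linspace(0.1, 0.9, n_knots))
    cols = [x**d for d in range(1, degree + 1)]
    cols += [np.clip(x - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

B1, B2 = spline_basis(x1), spline_basis(x2)
X = np.column_stack([np.ones(n), B1, B2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Additivity lets us read off each feature's contribution separately.
k = B1.shape[1]
f1 = B1 @ coef[1 : 1 + k]   # marginal effect of x1
f2 = B2 @ coef[1 + k :]     # marginal effect of x2
# The prediction is exactly intercept + f1 + f2.
assert np.allclose(coef[0] + f1 + f2, X @ coef)
# f1 recovers sin(x1) up to a constant shift absorbed by the intercept.
print("corr(f1, sin(x1)):", np.corrcoef(f1, np.sin(x1))[0, 1])
```

Plotting `f1` against `x1` is exactly the kind of per-feature effect plot GAM libraries produce.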
3. Cons
- Any link function other than the identity complicates the interpretation, as do interaction terms
- Nonlinear feature effects are either less intuitive (e.g. a log transformation) or can no longer be summarized by a single number (e.g. a spline function)
- Tree-based ensembles such as random forests or gradient tree boosting often achieve better predictive performance
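The link-function caveat is easy to see numerically. With a logit link, the model is additive on the log-odds scale, so a fixed change in a feature's effect multiplies the odds by a constant factor, while the change in probability depends on the starting point. A stdlib-only sketch (the +0.7 effect size and the baseline log-odds are arbitrary assumptions for the demo):

```python
import math

def logistic(z):
    """Inverse of the logit link: maps log-odds to a probability."""
    return 1 / (1 + math.exp(-z))

# A +0.7 change on the log-odds scale always multiplies the ODDS by exp(0.7),
# but the resulting change in PROBABILITY varies with the baseline.
for base_logit in (-2.0, 0.0, 2.0):
    p_before = logistic(base_logit)
    p_after = logistic(base_logit + 0.7)
    print(f"odds ratio: {math.exp(0.7):.2f}, "
          f"p: {p_before:.3f} -> {p_after:.3f} "
          f"(change = {p_after - p_before:+.3f})")
```

This is why logistic GAMs are usually interpreted in terms of odds ratios rather than probability differences.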