CinDay-RUG-IML-2018

Slides and other material for Cincinnati-Dayton useR presentation on interpretable machine learning with R

Interpretable Machine Learning Presentation

Slides, code, and data for Interpretable Machine Learning, presented September 18, 2018 at the Cincinnati-Dayton useR meetup. Launch slides

Overview

It is not enough to identify a machine learning model that optimizes predictive performance. Rather, understanding the model's logic with global and local interpretability approaches is necessary for a model to be trusted and adopted for business decisions. This intermediate-to-advanced R presentation will introduce you to the concept of interpretable machine learning and to practical approaches for extracting unique insights into the underlying logic of machine learning models.

Learning Objectives

After this presentation, learners will be able to:

  1. Explain what machine learning interpretability is and the components
    that are involved.
  2. Understand which models are naturally more interpretable than others
    and why.
  3. Discuss the differences between global and local interpretations.
  4. Apply practical approaches to gain global understanding of ML
    models.
  5. Apply practical approaches to gain local understanding of ML models.

Prerequisites

A strong understanding of programming in R and fundamental knowledge of machine learning models are required for success in this training.

Outline

The following is an outline of the material covered in this training; short R sketches of the core global and local techniques follow the outline:

  • Introduction
    • A mental model of machine learning interpretability
    • The focus of this presentation
  • Terminology to consider
    • Interpretable models vs model interpretation
    • Model specific vs model agnostic
    • Scope of interpretability
  • Prerequisites
    • Packages, data, and models used in presentation
    • Model agnostic procedures
  • Global interpretation
    • Feature importance
      • Permutation-based feature importance
    • Feature effects
      • Partial dependence
      • Interactions
  • Local interpretation
    • Feature effects
      • ICE curves
    • Feature importance
      • LIME
      • Shapley values
      • Breakdown
  • Summary of solutions
  • Concluding remarks
    • Where to learn more
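
Example: model-agnostic interpretation in R

To give a flavor of the global interpretation material above, here is a minimal sketch using the iml and randomForest packages on the built-in mtcars data. These packages, the dataset, and the chosen feature are illustrative assumptions, not necessarily the ones used in the presentation.

```r
library(randomForest)
library(iml)

set.seed(123)
rf <- randomForest(mpg ~ ., data = mtcars)

# Wrap the fitted model and its data so iml's model-agnostic tools can query it
predictor <- Predictor$new(rf, data = mtcars[, -1], y = mtcars$mpg)

# Global: permutation-based feature importance
imp <- FeatureImp$new(predictor, loss = "rmse")
plot(imp)

# Global: partial dependence of mpg on a single feature (wt as an example)
pdp_wt <- FeatureEffect$new(predictor, feature = "wt", method = "pdp")
plot(pdp_wt)

# Global: overall interaction strength per feature (Friedman's H-statistic)
inter <- Interaction$new(predictor)
plot(inter)
```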
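
The local interpretation approaches (ICE curves, a LIME-style local surrogate, and Shapley values) can be sketched the same way, reusing the predictor object from the global example; the breakdown method lives in separate packages (e.g. breakDown/DALEX) and is not shown here.

```r
# Explain a single observation (here, simply the first row of mtcars)
obs <- mtcars[1, -1]

# Local: ICE curves draw one partial-dependence-style curve per observation
ice_wt <- FeatureEffect$new(predictor, feature = "wt", method = "pdp+ice")
plot(ice_wt)

# Local: LIME-style surrogate model fit in the neighborhood of the observation
lime_expl <- LocalModel$new(predictor, x.interest = obs, k = 4)
plot(lime_expl)

# Local: Shapley values distribute the prediction across the features
shap <- Shapley$new(predictor, x.interest = obs)
plot(shap)
```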