

Responsible ML with Insurance Applications

Welcome to our lecture. It covers the following main topics:

  • Statistical learning, model comparison, and calibration assessment (Christian)
  • Explainability (Michael)

From time to time, we will update the material linked below. You can also clone the repository with

  git clone https://github.com/lorentzenchr/responsible_ml_material.git

Christian's Material

Slides

Slides (pdf)

Main reference

Tobias Fissler, Christian Lorentzen, and Michael Mayer. “Model Comparison and Calibration Assessment: User Guide for Consistent Scoring Functions in Machine Learning and Actuarial Practice”. In: (2022). doi: 10.48550/ARXIV.2202.12780.

Python and R code for the tutorial

Michael's Material

Slides

Slides XAI (pdf)

Lecture notes

Python notebooks (ipynb)

  1. Introduction
  2. Explaining Models
  3. Improving Explainability

Setup

Python: We use Python 3.11 and the packages specified here.

(Note for R users: You can work with R >= 4.3 and the packages loaded in the notebooks.)
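Before installing the packages, you can quickly confirm that your interpreter matches the Python 3.11 requirement. This is only a small sketch (the `check_python` helper is our own illustration, not part of the course material):

```python
import sys

def check_python(required=(3, 11), version=None):
    """Return True if `version` (default: the running interpreter)
    is at least the required (major, minor) version."""
    if version is None:
        version = sys.version_info[:2]
    return tuple(version) >= tuple(required)

if __name__ == "__main__":
    if check_python():
        print("Python version OK for the course material.")
    else:
        print("Please use Python 3.11 or newer.")
```

Run it once inside the environment you intend to use for the notebooks.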

Additional Literature

Model evaluation and scoring functions

Explainability

  • C. Lorentzen and M. Mayer. “Peeking into the Black Box: An Actuarial Case Study for Interpretable Machine Learning”. In: SSRN Manuscript ID 3595944 (2020). doi: 10.2139/ssrn.3595944.
  • M. Mayer, D. Meier, and M. V. Wüthrich. “SHAP for Actuaries: Explain Any Model”. In: SSRN Manuscript ID 4389797 (2023). doi: 10.2139/ssrn.4389797.
  • Christoph Molnar. Interpretable Machine Learning. 1st ed. Raleigh, North Carolina: Lulu.com, 2019. isbn: 978-0-244-76852-2. url: https://christophm.github.io/interpretable-ml-book.

Books on responsible ML or AI

  • Alyssa Simpson Rochwerger and Wilson Pang. Real World AI: A Practical Guide for Responsible Machine Learning. Lioncrest Publishing, 2021
  • Patrick Hall, James Curtis, and Parul Pandey. Machine Learning for High-Risk Applications. O’Reilly Media, Inc., 2022