Princeton University Library Catalog
Interpretable Machine Learning with Python : Build Explainable, Fair, and Robust High-Performance Models with Hands-on, Real-world Examples / Serg Masís, Aleksander Molak, and Denis Rothman.
Author
Masís, Serg
Format
Book
Language
English
Edition
Second edition.
Published/Created
Birmingham, England : Packt Publishing, [2023]
©2023
Description
1 online resource (607 pages)
Availability
Available Online
O'Reilly Online Learning: Academic/Public Library Edition
Details
Subject(s)
Machine learning
Python (Computer program language)
Data mining
Author
Molak, Aleksander
Rothman, Denis
Series
Expert insight.
Summary note
Interpretable Machine Learning with Python, Second Edition, brings to light the key concepts of interpreting machine learning models by analyzing real-world data, providing you with a wide range of skills and tools to decipher the results of even the most complex models. Build your interpretability toolkit with several use cases, from flight delay prediction to waste classification to COMPAS risk assessment scores. This book is full of useful techniques, matching each to the right use case. Learn methods ranging from traditional ones, such as feature importance and partial dependence plots, to integrated gradients for NLP interpretations and gradient-based attribution methods such as saliency maps. In addition to the step-by-step code, you'll get hands-on experience tuning models and training data for interpretability by reducing complexity, mitigating bias, placing guardrails, and enhancing reliability. By the end of the book, you'll be confident in tackling interpretability challenges with black-box models using tabular, language, image, and time series data.

This book is for data scientists, machine learning developers, machine learning engineers, MLOps engineers, and data stewards who have an increasingly critical responsibility to explain how the artificial intelligence systems they develop work, their impact on decision making, and how they identify and manage bias. It's also a useful resource for self-taught ML enthusiasts and beginners who want to go deeper into the subject matter, though a good grasp of the Python programming language is needed to implement the examples.
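One of the traditional methods the summary mentions, feature importance via permutation, can be illustrated with a minimal sketch using scikit-learn. The synthetic dataset and random-forest model below are illustrative assumptions for this note, not examples taken from the book:

```python
# Minimal sketch of permutation feature importance (scikit-learn).
# The data and model here are illustrative, not the book's own examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary classification data: 5 features, 3 of them informative.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn and measure the drop in test score;
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, mean_imp in enumerate(result.importances_mean):
    print(f"feature {i}: {mean_imp:.3f}")
```

Because the importance is measured on held-out data against the model's predictions, the method is model-agnostic, which is why it appears early in toolkits like the one this book builds.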
Notes
Includes index.
Source of description
Description based on print version record.
Contents
Cover
Copyright
Contributors
Table of Contents
Preface
Chapter 1: Interpretation, Interpretability, and Explainability; and Why Does It All Matter?
Technical requirements
What is machine learning interpretation?
Understanding a simple weight prediction model
Understanding the difference between interpretability and explainability
What is interpretability?
Beware of complexity
When does interpretability matter?
What are black-box models?
What are white-box models?
What is explainability?
Why and when does explainability matter?
A business case for interpretability
Better decisions
More trusted brands
More ethical
More profitable
Summary
Image sources
Dataset sources
Further reading
Chapter 2: Key Concepts of Interpretability
The mission
Details about CVD
The approach
Preparations
Loading the libraries
Understanding and preparing the data
The data dictionary
Data preparation
Interpretation method types and scopes
Model interpretability method types
Model interpretability scopes
Interpreting individual predictions with logistic regression
Appreciating what hinders machine learning interpretability
Non-linearity
Interactivity
Non-monotonicity
Mission accomplished
Chapter 3: Interpretation Challenges
The preparations
Reviewing traditional model interpretation methods
Predicting minutes delayed with various regression methods
Classifying flights as delayed or not delayed with various classification methods
Training and evaluating the classification models
Understanding limitations of traditional model interpretation methods
Studying intrinsically interpretable (white-box) models
Generalized linear models (GLMs)
Linear regression
Ridge regression
Polynomial regression
Logistic regression
Decision trees
CART decision trees
RuleFit
Interpretation and feature importance
Nearest neighbors
k-Nearest Neighbors
Naïve Bayes
Gaussian Naïve Bayes
Recognizing the trade-off between performance and interpretability
Special model properties
The key property: explainability
The remedial property: regularization
Assessing performance
Discovering newer interpretable (glass-box) models
Explainable Boosting Machine (EBM)
Global interpretation
Local interpretation
Performance
GAMI-Net
Chapter 4: Global Model-Agnostic Interpretation Methods
Model training and evaluation
What is feature importance?
Assessing feature importance with model-agnostic methods
Permutation feature importance
SHAP values
Comprehensive explanations with KernelExplainer
Faster explanations with TreeExplainer
Visualize global explanations
SHAP bar plot
SHAP beeswarm plot
Feature summary explanations
Partial dependence plots
SHAP scatter plot
ALE plots
Feature interactions
SHAP bar plot with clustering
2D ALE plots
PDP interactions plots
Chapter 5: Local Model-Agnostic Interpretation Methods
Loading the libraries
Leveraging SHAP's KernelExplainer for local interpretations with SHAP values
Training a C-SVC model
Computing SHAP values using KernelExplainer
Local interpretation for a group of predictions using decision plots
Local interpretation for a single prediction at a time using a force plot
Employing LIME
What is LIME?
Local interpretation for a single prediction at a time using LimeTabularExplainer
Using LIME for NLP
Training a LightGBM model
Local interpretation for a single prediction at a time using LimeTextExplainer
Trying SHAP for NLP
Comparing SHAP with LIME
Chapter 6: Anchors and Counterfactual Explanations
Unfair bias in recidivism risk assessments
Examining predictive bias with confusion matrices
Modeling
Getting acquainted with our "instance of interest"
Understanding anchor explanations
Preparations for anchor and counterfactual explanations with alibi
Local interpretations for anchor explanations
Exploring counterfactual explanations
Counterfactual explanations guided by prototypes
Counterfactual instances and much more with WIT
Configuring WIT
Datapoint editor
Performance & Fairness
Chapter 7: Visualizing Convolutional Neural Networks
Inspect data
The CNN models
Load the CNN model
Assessing the CNN classifier with traditional interpretation methods
Determining what misclassifications to focus on
Visualizing the learning process with activation-based methods
Intermediate activations
Evaluating misclassifications with gradient-based attribution methods
Saliency maps
Guided Grad-CAM
Integrated gradients
Bonus method: DeepLIFT
Tying it all together
Understanding classifications with perturbation-based attribution methods
Feature ablation
Occlusion sensitivity
Shapley value sampling
KernelSHAP
Chapter 8: Interpreting NLP Transformers
Loading the model
Visualizing attention with BertViz
Plotting all attention with the model view
Diving into layer attention with the head view
Interpreting token attributions with integrated gradients
LIME, counterfactuals, and other possibilities with the LIT
Chapter 9: Interpretation Methods for Multivariate Forecasting and Sensitivity Analysis
The preparation
Understanding the data
Loading the LSTM model
Assessing time series models with traditional interpretation methods
Using standard regression metrics
Predictive error aggregations
Evaluating the model like a classification problem
Generating LSTM attributions with integrated gradients
Computing global and local attributions with SHAP's KernelExplainer
Why use KernelExplainer?
Defining a strategy to get it to work with a multivariate time series model
Laying the groundwork for the permutation approximation strategy
Computing the SHAP values
Identifying influential features with factor prioritization
Computing Morris sensitivity indices
Analyzing the elementary effects
Quantifying uncertainty and cost sensitivity with factor fixing
Generating and predicting on Saltelli samples
Performing Sobol sensitivity analysis
Incorporating a realistic cost function
Dataset and image sources
Chapter 10: Feature Selection and Engineering for Interpretability
Understanding the effect of irrelevant features
Creating a base model
Evaluating the model
Training the base model at different max depths
Reviewing filter-based feature selection methods
Basic filter-based methods
Constant features with a variance threshold
Quasi-constant features with value_counts
Duplicating features
Removing unnecessary features
Correlation filter-based methods
Ranking filter-based methods
Comparing filter-based methods
Exploring embedded feature selection methods
Discovering wrapper, hybrid, and advanced feature selection methods
Wrapper methods
Sequential forward selection (SFS)
Hybrid methods
Recursive Feature Elimination (RFE)
Advanced methods
Model-agnostic feature importance
Genetic algorithms
Evaluating all feature-selected models
Considering feature engineering
Chapter 11: Bias Mitigation and Causal Inference Methods
The approach
The preparations
ISBN
1-80324-362-7
OCLC
1407573405