LEADER 01466nam a2200373 i 4500
001 99129135750906421
005 20231129122544.0
006 m o d |
007 cr |||||||||||
008 231129t20232023enka o 001 0 eng d
020 1-80324-362-7
035 (CKB)28852986100041
035 (MiAaPQ)EBC30949571
035 (Au-PeEL)EBL30949571
035 (OCoLC)1407573405
035 (OCoLC-P)1407573405
035 (FR-PaCSA)88948202
035 (CaSebORM)9781803235424
035 (DE-B1597)691517
035 (DE-B1597)9781803243627
035 (EXLCZ)9928852986100041
040 MiAaPQ |beng |erda |epn |cMiAaPQ |dMiAaPQ
044 pl |cPL
050 4 Q335 |b.M377 2023
072 7 COM025000 |2bisacsh
082 0 006.3 |223
100 1 Masís, Serg, |eauthor.
245 10 Interpretable Machine Learning with Python : |bBuild Explainable, Fair, and Robust High-Performance Models with Hands-on, Real-world Examples / |cSerg Masís ; forewords by Aleksander Molak and Denis Rothman.
250 Second edition.
264 1 Birmingham, England : |bPackt Publishing, |c[2023]
264 4 |c©2023
300 1 online resource (607 pages)
336 text |btxt |2rdacontent
337 computer |bc |2rdamedia
338 online resource |bcr |2rdacarrier
490 1 Expert insight
505 0 Cover -- Copyright -- Contributors -- Table of Contents -- Preface -- Chapter 1: Interpretation, Interpretability, and Explainability; and Why Does It All Matter? -- Technical requirements -- What is machine learning interpretation? -- Understanding a simple weight prediction model -- Understanding the difference between interpretability and explainability -- What is interpretability? -- Beware of complexity -- When does interpretability matter? -- What are black-box models? -- What are white-box models? -- What is explainability? -- Why and when does explainability matter? -- A business case for interpretability -- Better decisions -- More trusted brands -- More ethical -- More profitable -- Summary -- Image sources -- Dataset sources -- Further reading -- Chapter 2: Key Concepts of Interpretability -- Technical requirements -- The mission -- Details about CVD -- The approach -- Preparations -- Loading the libraries -- Understanding and preparing the data -- The data dictionary -- Data preparation -- Interpretation method types and scopes -- Model interpretability method types -- Model interpretability scopes -- Interpreting individual predictions with logistic regression -- Appreciating what hinders machine learning interpretability -- Non-linearity -- Interactivity -- Non-monotonicity -- Mission accomplished -- Summary -- Further reading -- Chapter 3: Interpretation Challenges -- Technical requirements -- The mission -- The approach -- The preparations -- Loading the libraries -- Understanding and preparing the data -- The data dictionary -- Data preparation -- Reviewing traditional model interpretation methods -- Predicting minutes delayed with various regression methods -- Classifying flights as delayed or not delayed with various classification methods -- Training and evaluating the classification models.
505 8 Understanding limitations of traditional model interpretation methods -- Studying intrinsically interpretable (white-box) models -- Generalized linear models (GLMs) -- Linear regression -- Ridge regression -- Polynomial regression -- Logistic regression -- Decision trees -- CART decision trees -- RuleFit -- Interpretation and feature importance -- Nearest neighbors -- k-Nearest Neighbors -- Naïve Bayes -- Gaussian Naïve Bayes -- Recognizing the trade-off between performance and interpretability -- Special model properties -- The key property: explainability -- The remedial property: regularization -- Assessing performance -- Discovering newer interpretable (glass-box) models -- Explainable Boosting Machine (EBM) -- Global interpretation -- Local interpretation -- Performance -- GAMI-Net -- Global interpretation -- Local interpretation -- Performance -- Mission accomplished -- Summary -- Dataset sources -- Further reading -- Chapter 4: Global Model-Agnostic Interpretation Methods -- Technical requirements -- The mission -- The approach -- The preparations -- Loading the libraries -- Data preparation -- Model training and evaluation -- What is feature importance? -- Assessing feature importance with model-agnostic methods -- Permutation feature importance -- SHAP values -- Comprehensive explanations with KernelExplainer -- Faster explanations with TreeExplainer -- Visualizing global explanations -- SHAP bar plot -- SHAP beeswarm plot -- Feature summary explanations -- Partial dependence plots -- SHAP scatter plot -- ALE plots -- Feature interactions -- SHAP bar plot with clustering -- 2D ALE plots -- PDP interaction plots -- Mission accomplished -- Summary -- Further reading -- Chapter 5: Local Model-Agnostic Interpretation Methods -- Technical requirements -- The mission -- The approach -- The preparations -- Loading the libraries.
505 8 Understanding and preparing the data -- The data dictionary -- Data preparation -- Leveraging SHAP's KernelExplainer for local interpretations with SHAP values -- Training a C-SVC model -- Computing SHAP values using KernelExplainer -- Local interpretation for a group of predictions using decision plots -- Local interpretation for a single prediction at a time using a force plot -- Employing LIME -- What is LIME? -- Local interpretation for a single prediction at a time using LimeTabularExplainer -- Using LIME for NLP -- Training a LightGBM model -- Local interpretation for a single prediction at a time using LimeTextExplainer -- Trying SHAP for NLP -- Comparing SHAP with LIME -- Mission accomplished -- Summary -- Dataset sources -- Further reading -- Chapter 6: Anchors and Counterfactual Explanations -- Technical requirements -- The mission -- Unfair bias in recidivism risk assessments -- The approach -- The preparations -- Loading the libraries -- Understanding and preparing the data -- The data dictionary -- Examining predictive bias with confusion matrices -- Data preparation -- Modeling -- Getting acquainted with our "instance of interest" -- Understanding anchor explanations -- Preparations for anchor and counterfactual explanations with alibi -- Local interpretations for anchor explanations -- Exploring counterfactual explanations -- Counterfactual explanations guided by prototypes -- Counterfactual instances and much more with WIT -- Configuring WIT -- Datapoint editor -- Performance & Fairness -- Mission accomplished -- Summary -- Dataset sources -- Further reading -- Chapter 7: Visualizing Convolutional Neural Networks -- Technical requirements -- The mission -- The approach -- Preparations -- Loading the libraries -- Understanding and preparing the data -- Data preparation -- Inspect data -- The CNN models -- Load the CNN model.
505 8 Assessing the CNN classifier with traditional interpretation methods -- Determining what misclassifications to focus on -- Visualizing the learning process with activation-based methods -- Intermediate activations -- Evaluating misclassifications with gradient-based attribution methods -- Saliency maps -- Guided Grad-CAM -- Integrated gradients -- Bonus method: DeepLIFT -- Tying it all together -- Understanding classifications with perturbation-based attribution methods -- Feature ablation -- Occlusion sensitivity -- Shapley value sampling -- KernelSHAP -- Tying it all together -- Mission accomplished -- Summary -- Further reading -- Chapter 8: Interpreting NLP Transformers -- Technical requirements -- The mission -- The approach -- The preparations -- Loading the libraries -- Understanding and preparing the data -- The data dictionary -- Loading the model -- Visualizing attention with BertViz -- Plotting all attention with the model view -- Diving into layer attention with the head view -- Interpreting token attributions with integrated gradients -- LIME, counterfactuals, and other possibilities with the LIT -- Mission accomplished -- Summary -- Further reading -- Chapter 9: Interpretation Methods for Multivariate Forecasting and Sensitivity Analysis -- Technical requirements -- The mission -- The approach -- The preparations -- Loading the libraries -- Understanding and preparing the data -- The data dictionary -- Understanding the data -- Data preparation -- Loading the LSTM model -- Assessing time series models with traditional interpretation methods -- Using standard regression metrics -- Predictive error aggregations -- Evaluating the model like a classification problem -- Generating LSTM attributions with integrated gradients -- Computing global and local attributions with SHAP's KernelExplainer -- Why use KernelExplainer?.
505 8 Defining a strategy to get it to work with a multivariate time series model -- Laying the groundwork for the permutation approximation strategy -- Computing the SHAP values -- Identifying influential features with factor prioritization -- Computing Morris sensitivity indices -- Analyzing the elementary effects -- Quantifying uncertainty and cost sensitivity with factor fixing -- Generating and predicting on Saltelli samples -- Performing Sobol sensitivity analysis -- Incorporating a realistic cost function -- Mission accomplished -- Summary -- Dataset and image sources -- Further reading -- Chapter 10: Feature Selection and Engineering for Interpretability -- Technical requirements -- The mission -- The approach -- The preparations -- Loading the libraries -- Understanding and preparing the data -- Understanding the effect of irrelevant features -- Creating a base model -- Evaluating the model -- Training the base model at different max depths -- Reviewing filter-based feature selection methods -- Basic filter-based methods -- Constant features with a variance threshold -- Quasi-constant features with value_counts -- Duplicating features -- Removing unnecessary features -- Correlation filter-based methods -- Ranking filter-based methods -- Comparing filter-based methods -- Exploring embedded feature selection methods -- Discovering wrapper, hybrid, and advanced feature selection methods -- Wrapper methods -- Sequential forward selection (SFS) -- Hybrid methods -- Recursive Feature Elimination (RFE) -- Advanced methods -- Model-agnostic feature importance -- Genetic algorithms -- Evaluating all feature-selected models -- Considering feature engineering -- Mission accomplished -- Summary -- Dataset sources -- Further reading -- Chapter 11: Bias Mitigation and Causal Inference Methods -- Technical requirements -- The mission -- The approach.
505 8 The preparations.
520 Interpretable Machine Learning with Python, Second Edition, brings to light the key concepts of interpreting machine learning models by analyzing real-world data, providing you with a wide range of skills and tools to decipher the results of even the most complex models. Build your interpretability toolkit through several use cases, from flight delay prediction to waste classification to COMPAS risk assessment scores. The book pairs each technique with the use case it suits best, ranging from traditional methods, such as feature importance and partial dependence plots, to newer approaches, such as integrated gradients for NLP interpretations and gradient-based attribution methods like saliency maps. In addition to the step-by-step code, you’ll get hands-on with tuning models and training data for interpretability by reducing complexity, mitigating bias, placing guardrails, and enhancing reliability. By the end of the book, you’ll be confident in tackling interpretability challenges with black-box models using tabular, language, image, and time series data. This book is for data scientists, machine learning developers, machine learning engineers, MLOps engineers, and data stewards who have an increasingly critical responsibility to explain how the artificial intelligence systems they develop work, their impact on decision making, and how they identify and manage bias. It’s also a useful resource for self-taught ML enthusiasts and beginners who want to go deeper into the subject matter, though a good grasp of the Python programming language is needed to implement the examples.
588 Description based on print version record.
500 Includes index.
650 0 Machine learning.
650 0 Python (Computer program language)
650 0 Data mining.
700 1 Molak, Aleksander, |ewriter of foreword.
700 1 Rothman, Denis, |ewriter of foreword.
776 |z1-80323-542-X
830 0 Expert insight.
906 BOOK