Princeton University Library Catalog
Adversarial AI Attacks, Mitigations, and Defense Strategies : A Cybersecurity Professional's Guide to AI Attacks, Threat Modeling, and Securing AI with MLSecOps / John Sotiropoulos.
Author
Sotiropoulos, John
Format
Book
Language
English
Edition
First edition.
Published/Created
Birmingham, UK : Packt Publishing Ltd., [2024]
©2024
Description
1 online resource (586 pages)
Details
Subject(s)
Artificial intelligence
Summary note
Understand how adversarial attacks work against predictive and generative AI, and learn how to safeguard AI and LLM projects with practical examples leveraging OWASP, MITRE, and NIST.

Key Features
- Understand the connection between AI and security by learning about adversarial AI attacks
- Discover the latest security challenges in adversarial AI by examining GenAI, deepfakes, and LLMs
- Implement secure-by-design methods and threat modeling, using standards and MLSecOps to safeguard AI systems
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Adversarial attacks trick AI systems with malicious data, creating new security risks by exploiting how AI learns. This challenges cybersecurity, as it forces defenders to counter a whole new kind of threat. This book demystifies adversarial attacks and equips cybersecurity professionals with the skills to secure AI technologies, moving beyond research hype and business-as-usual strategies. A comprehensive, strategy-based guide to AI security, it presents a structured approach with practical examples for identifying and countering adversarial attacks. Rather than offering a random selection of threats, it consolidates recent research and industry standards, incorporating taxonomies from MITRE, NIST, and OWASP. A dedicated section then introduces a secure-by-design AI strategy with threat modeling to demonstrate risk-based defenses, focusing on integrating MLSecOps and LLMOps into security systems. For deeper insight, you'll work through examples of incorporating CI, MLOps, and security controls, including open-access LLMs and ML SBOMs. Building on the classic NIST pillars, the book provides a blueprint for maturing enterprise AI security and discusses the role of AI security in safety and ethics as part of Trustworthy AI. By the end of this book, you'll be able to develop, deploy, and secure AI systems effectively.

What you will learn
- Understand poisoning, evasion, and privacy attacks and how to mitigate them
- Discover how GANs can be used for attacks and deepfakes
- Explore how LLMs change security, prompt injections, and data exposure
- Master techniques to poison LLMs with RAG, embeddings, and fine-tuning
- Explore supply-chain threats and the challenges of open-access LLMs
- Implement MLSecOps with CIs, MLOps, and SBOMs

Who this book is for
This book tackles AI security from both angles: offense and defense. AI builders (developers and engineers) will learn how to create secure systems, while cybersecurity professionals such as security architects, analysts, engineers, ethical hackers, penetration testers, and incident responders will discover methods to combat the threats and mitigate the risks posed by attackers. The book also provides a secure-by-design approach that leaders can use to build AI with security in mind. To get the most out of this book, you'll need a basic understanding of security, ML concepts, and Python.
Source of description
Description based on publisher supplied metadata and other sources.
Description based on print version record.
Contents
Cover
Title Page
Copyright
Dedication
Contributors
Table of Contents
Preface
Part 1: Introduction to Adversarial AI
Chapter 1: Getting Started with AI
Understanding AI and ML
Types of ML and the ML life cycle
Key algorithms in ML
Neural networks and deep learning
ML development tools
Summary
Further reading
Chapter 2: Building Our Adversarial Playground
Technical requirements
Setting up your development environment
Python installation
Creating your virtual environment
Installing packages
Registering your virtual environment with Jupyter notebooks
Verifying your installation
Hands-on basic baseline ML
Simple NNs
Developing our target AI service with CNNs
Setup and data collection
Data exploration
Data preprocessing
Algorithm selection and building the model
Model training
Model evaluation
Model deployment
Inference service
ML development at scale
Google Colab
AWS SageMaker
Azure Machine Learning services
Lambda Labs Cloud
Chapter 3: Security and Adversarial AI
Security fundamentals
Threat modeling
Risks and mitigations
DevSecOps
Securing our adversarial playground
Host security
Network protection
Authentication
Data protection
Access control
Securing code and artifacts
Secure code
Securing dependencies with vulnerability scanning
Secret scanning
Securing Jupyter Notebooks
Securing models from malicious code
Integrating with DevSecOps and MLOps pipelines
Bypassing security with adversarial AI
Our first adversarial AI attack
Traditional cybersecurity and adversarial AI
Adversarial AI landscape
Part 2: Model Development Attacks
Chapter 4: Poisoning Attacks
Basics of poisoning attacks
Definition and examples
Types of poisoning attacks
Poisoning attack examples
Why it matters
Staging a simple poisoning attack
Creating poisoned samples
Backdoor poisoning attacks
Creating backdoor triggers with ART
Poisoning data with ART
Hidden-trigger backdoor attacks
Clean-label attacks
Advanced poisoning attacks
Mitigations and defenses
Cybersecurity defenses with MLOps
Anomaly detection
Robustness tests against poisoning
Advanced poisoning defenses with ART
Adversarial training
Creating a defense strategy
Chapter 5: Model Tampering with Trojan Horses and Model Reprogramming
Injecting backdoors using pickle serialization
Attack scenario
Defenses and mitigations
Injecting Trojan horses with Keras Lambda layers
Trojan horses with custom layers
Neural payload injection
Attacking edge AI
Model hijacking
Trojan horse code injection
Model reprogramming
Chapter 6: Supply Chain Attacks and Adversarial AI
Traditional supply chain risks and AI
Risks from outdated and vulnerable components
Risks from AI's dependency on live data
Securing AI from vulnerable components
Enhanced security - allow approved-only packages
Client configuration for private PyPI repositories
Additional private PyPI security
Use of SBOMs
AI supply chain risks
The double-edged sword of transfer learning
Model poisoning
Model tampering
Secure model provenance and governance for pre-trained models
MLOps and private model repositories
Data poisoning
Using data poisoning to affect sentiment analysis
AI/ML SBOMs
Summary
Part 3: Attacks on Deployed AI
Chapter 7: Evasion Attacks against Deployed AI
Fundamentals of evasion attacks
Importance of understanding evasion attacks
Reconnaissance techniques for evasion attacks
Perturbations and image evasion attack techniques
Evasion attack scenarios
One-step perturbation with FGSM
Basic Iterative Method (BIM)
Jacobian-based Saliency Map Attack (JSMA)
Carlini and Wagner (C&W) attack
Projected Gradient Descent (PGD)
Adversarial patches - bridging digital and physical evasion techniques
NLP evasion attacks with BERT using TextAttack
Attack scenario - sentiment analysis
Attack example
Attack scenario - natural language inference
Universal Adversarial Perturbations (UAPs)
Black-box attacks with transferability
Defending against evasion attacks
Mitigation strategies overview
Input preprocessing
Model hardening techniques
Model ensembles
Certified defenses
Chapter 8: Privacy Attacks - Stealing Models
Understanding privacy attacks
Stealing models with model extraction attacks
Functionally equivalent extraction
Learning-based model extraction attacks
Generative student-teacher extraction (distillation) attacks
Attack example against our CIFAR-10 CNN
Prevention measures
Detection measures
Model ownership identification and recovery
Chapter 9: Privacy Attacks - Stealing Data
Understanding model inversion attacks
Types of model inversion attacks
Exploitation of model confidence scores
GAN-assisted model inversion
Example model inversion attack
Understanding inference attacks
Attribute inference attacks
Meta-classifiers
Poisoning-assisted inference
Attack scenarios
Mitigations
Example attribute inference attack
Membership inference attacks
Statistical thresholds for ML leaks
Label-only data transferring attack
Blind membership inference attacks
White box attacks
Example membership inference attack using ART
Chapter 10: Privacy-Preserving AI
Privacy-preserving ML and AI
Simple data anonymization
Advanced anonymization
K-anonymity
Anonymization and geolocation data
Anonymizing rich media
Differential privacy (DP)
Federated learning (FL)
Split learning
Advanced encryption options for privacy-preserving ML
Secure multi-party computation (secure MPC)
Homomorphic encryption
Advanced ML encryption techniques in practice
Applying privacy-preserving ML techniques
Part 4: Generative AI and Adversarial Attacks
Chapter 11: Generative AI - A New Frontier
A brief introduction to generative AI
A brief history of the evolution of generative AI
Generative AI technologies
Using GANs
Developing a GAN from scratch
WGANs and custom loss functions
Using pre-trained GANs
Pix2Pix
CycleGAN
Pix2PixHD
Progressive Growing of GANs (PGGAN)
BigGAN
StarGAN v2
StyleGAN series
Chapter 12: Weaponizing GANs for Deepfakes and Adversarial Attacks
Use of GANs for deepfakes and deepfake detection
Using StyleGAN to generate convincing fake images
Creating simple deepfakes with GANs using existing images
Making direct changes to an existing image
Using Pix2PixHD to synthesize images
Fake videos and animations
Other AI deepfake technologies
Voice deepfakes
Deepfake detection
Using GANs in cyberattacks and offensive security
Evading face verification
Compromising biometric authentication
Password cracking with GANs
Malware detection evasion
GANs in cryptography and steganography
Generating web attack payloads with GANs
Generating adversarial attack payloads
Securing GANs
GAN-assisted adversarial attacks
Deepfakes, malicious content, and misinformation
Chapter 13: LLM Foundations for Adversarial AI
A brief introduction to LLMs
Developing AI applications with LLMs
Hello LLM with Python
Hello LLM with LangChain
Bringing your own data
How LLMs change Adversarial AI
Chapter 14: Adversarial Attacks with Prompts
Adversarial inputs and prompt injection
Direct prompt injection
Prompt override
Style injection
Role-playing
Impersonation
Other jailbreaking techniques
Automated gradient-based prompt injection
Risks from bringing your own data
Indirect prompt injection
Data exfiltration with prompt injection
Privilege escalation with prompt injection
RCE with prompt injection
LLM platform defenses
Application-level defenses
Chapter 15: Poisoning Attacks and LLMs
Poisoning embeddings in RAG
Poisoning during embedding generation
Direct embeddings poisoning
Advanced embeddings poisoning
Query embeddings manipulation
Poisoning attacks on fine-tuning LLMs
Introduction to fine-tuning LLMs
Fine-tuning poisoning attack scenarios
Fine-tuning attack vectors
Poisoning ChatGPT 3.5 with fine-tuning
Defenses and mitigations against poisoning attacks in fine-tuning
Chapter 16: Advanced Generative AI Scenarios
Supply-chain attacks in LLMs
Publishing a poisoned LLM on Hugging Face
Publishing a tampered LLM on Hugging Face
Other supply-chain risks for LLMs
ISBN
9781835088678 (electronic bk.)
OCLC
1446416572