Generative AI Foundations in Python : Discover Key Techniques and Navigate Modern Challenges in LLMs / Carlos Rodriguez and Samira Shaikh.

Author
Rodriguez, Carlos, 1945-
Format
Book
Language
English
Edition
First edition.
Published/​Created
  • Birmingham, England : Packt Publishing, [2024]
  • ©2024
Description
1 online resource (190 pages)

Details

Summary note
Begin your generative AI journey with Python as you explore large language models, understand responsible generative AI practices, and apply your knowledge to real-world applications through guided tutorials.

Key Features
  • Gain expertise in prompt engineering, LLM fine-tuning, and domain adaptation
  • Use transformer-based LLMs and diffusion models to implement AI applications
  • Discover strategies to optimize model performance, address ethical considerations, and build trust in AI systems
  • Purchase of the print or Kindle book includes a free PDF eBook

Book Description
The intricacies and breadth of generative AI (GenAI) and large language models can sometimes eclipse their practical application, so it is pivotal to understand the foundational concepts needed to implement generative AI. This guide explains the core concepts behind state-of-the-art generative models by combining theory and hands-on application. Generative AI Foundations in Python begins by laying a foundational understanding, presenting the fundamentals of generative LLMs and their historical evolution, while setting the stage for deeper exploration. You'll also learn how to apply generative LLMs in real-world applications. The book cuts through the complexity and offers actionable guidance on deploying and fine-tuning pre-trained language models with Python. Later, you'll delve into topics such as task-specific fine-tuning, domain adaptation, prompt engineering, quantitative evaluation, and responsible AI, focusing on how to use generative LLMs effectively and responsibly. By the end of this book, you'll be well-versed in applying generative AI capabilities to real-world problems, confidently navigating its enormous potential ethically and responsibly.
What you will learn
  • Discover the fundamentals of GenAI and its foundations in NLP
  • Dissect foundational generative architectures, including GANs, transformers, and diffusion models
  • Find out how to fine-tune LLMs for specific NLP tasks
  • Understand transfer learning and fine-tuning to facilitate domain adaptation, including fields such as finance
  • Explore prompt engineering, including in-context learning, templatization, and rationalization through chain-of-thought and RAG
  • Implement responsible practices with generative LLMs to minimize bias, toxicity, and other harmful outputs

Who this book is for
This book is for developers, data scientists, and machine learning engineers embarking on projects driven by generative AI. A general understanding of machine learning and deep learning, as well as some proficiency with Python, is expected.
Notes
  • Description based upon print version of record.
Bibliographic references
Includes bibliographical references and index.
Source of description
  • Description based on publisher supplied metadata and other sources.
Contents
  • Intro
  • Title Page
  • Copyright and Credits
  • Dedications
  • Foreword
  • Contributors
  • Table of Contents
  • Preface
  • Part 1: Foundations of Generative AI and the Evolution of Large Language Models
  • Chapter 1: Understanding Generative AI: An Introduction
  • Generative AI
  • Distinguishing generative AI from other AI models
  • Briefly surveying generative approaches
  • Clarifying misconceptions between discriminative and generative paradigms
  • Choosing the right paradigm
  • Looking back at the evolution of generative AI
  • Overview of traditional methods in NLP
  • Arrival and evolution of transformer-based models
  • Development and impact of GPT-4
  • Looking ahead at risks and implications
  • Introducing use cases of generative AI
  • The future of generative AI applications
  • Summary
  • References
  • Chapter 2: Surveying GenAI Types and Modes: An Overview of GANs, Diffusers, and Transformers
  • Understanding generative AI (GAI) types - distinguishing features of GANs, diffusers, and transformers
  • Deconstructing GAI methods - exploring GANs, diffusers, and transformers
  • A closer look at GANs
  • A closer look at diffusion models
  • A closer look at generative transformers
  • Applying GAI models - image generation using GANs, diffusers, and transformers
  • Working with Jupyter Notebook and Google Colab
  • Stable diffusion transformer
  • Scoring with the CLIP model
  • Chapter 3: Tracing the Foundations of Natural Language Processing and the Impact of the Transformer
  • Early approaches in NLP
  • Advent of neural language models
  • Distributed representations
  • Transfer learning
  • Advent of NNs in NLP
  • The emergence of the Transformer in advanced language models
  • Components of the transformer architecture
  • Sequence-to-sequence learning
  • Evolving language models - the AR Transformer and its role in GenAI
  • Implementing the original Transformer
  • Data loading and preparation
  • Tokenization
  • Data tensorization
  • Dataset creation
  • Embeddings layer
  • Positional encoding
  • Multi-head self-attention
  • FFN
  • Encoder layer
  • Encoder
  • Decoder layer
  • Decoder
  • Complete transformer
  • Training function
  • Translation function
  • Main execution
  • Chapter 4: Applying Pretrained Generative Models: From Prototype to Production
  • Prototyping environments
  • Transitioning to production
  • Mapping features to production setup
  • Setting up a production-ready environment
  • Local development setup
  • Visual Studio Code
  • Project initialization
  • Docker setup
  • Requirements file
  • Application code
  • Creating a code repository
  • CI/CD setup
  • Model selection - choosing the right pretrained generative model
  • Meeting project objectives
  • Model size and computational complexity
  • Benchmarking
  • Updating the prototyping environment
  • GPU configuration
  • Loading pretrained models with LangChain
  • Setting up testing data
  • Quantitative metrics evaluation
  • Alignment with CLIP
  • Interpreting outcomes
  • Responsible AI considerations
  • Addressing and mitigating biases
  • Transparency and explainability
  • Final deployment
  • Testing and monitoring
  • Maintenance and reliability
  • Part 2: Practical Applications of Generative AI
  • Chapter 5: Fine-Tuning Generative Models for Specific Tasks
  • Foundation and relevance - an introduction to fine-tuning
  • PEFT
  • LoRA
  • AdaLoRA
  • In-context learning
  • Fine-tuning versus in-context learning
  • Practice project: Fine-tuning for Q&A using PEFT
  • Background regarding question-answering fine-tuning
  • Implementation in Python
  • Evaluation of results
  • References
  • Chapter 6: Understanding Domain Adaptation for Large Language Models
  • Demystifying domain adaptation - understanding its history and importance
  • Practice project: Transfer learning for the finance domain
  • Training methodologies for financial domain adaptation
  • Evaluation and outcome analysis - the ROUGE metric
  • Chapter 7: Mastering the Fundamentals of Prompt Engineering
  • The shift to prompt-based approaches
  • Basic prompting - guiding principles, types, and structures
  • Guiding principles for model interaction
  • Prompt elements and structure
  • Elevating prompts - iteration and influencing model behaviors
  • LLMs respond to emotional cues
  • Effect of personas
  • Situational prompting or role-play
  • Advanced prompting in action - few-shot learning and prompt chaining
  • Practice project: Implementing RAG with LlamaIndex using Python
  • Chapter 8: Addressing Ethical Considerations and Charting a Path Toward Trustworthy Generative AI
  • Ethical norms and values in the context of generative AI
  • Investigating and minimizing bias in generative LLMs and generative image models
  • Constrained generation and eliciting trustworthy outcomes
  • Constrained generation with fine-tuning
  • Constrained generation through prompt engineering
  • Understanding jailbreaking and harmful behaviors
  • Practice project: Minimizing harmful behaviors with filtering
  • Index
  • About Packt
  • Other Books You May Enjoy
ISBN
9781835464915 (electronic bk.)
OCLC
1443939992
Statement on language in description
Princeton University Library aims to describe library materials in a manner that is respectful to the individuals and communities who create, use, and are represented in the collections we manage.