Quick Start Guide to Large Language Models

Strategies and Best Practices for ChatGPT, Embeddings, Fine-Tuning, and Multimodal AI
Sinan Ozdemir
(2024)
Language: English
Addison Wesley Professional
DKK 368.00
Not in stock. Order now and have it delivered in approx. 15 business days.

Product details

  • 2nd edition
  • Paperback: 384 pages
  • Publisher: Addison Wesley Professional (October 2024)
  • ISBN: 9780135346563

The Practical, Step-by-Step Guide to Using LLMs at Scale in Projects and Products

Large Language Models (LLMs) like Llama 3, Claude 3, and the GPT family are demonstrating breathtaking capabilities, but their size and complexity have deterred many practitioners from applying them. In Quick Start Guide to Large Language Models, Second Edition, pioneering data scientist and AI entrepreneur Sinan Ozdemir clears away those obstacles and provides a guide to working with, integrating, and deploying LLMs to solve practical problems.

Ozdemir brings together all you need to get started, even if you have no direct experience with LLMs: step-by-step instructions, best practices, real-world case studies, and hands-on exercises. Along the way, he shares insights into LLMs' inner workings to help you optimize model choice, data formats, prompting, fine-tuning, performance, and much more. The resources on the companion website include sample datasets and up-to-date code for working with open- and closed-source LLMs such as those from OpenAI (GPT-4 and GPT-3.5), Google (BERT, T5, and Gemini), X (Grok), Anthropic (the Claude family), Cohere (the Command family), and Meta (BART and the LLaMA family).

  • Learn key concepts: pre-training, transfer learning, fine-tuning, attention, embeddings, tokenization, and more
  • Use APIs and Python to fine-tune and customize LLMs for your requirements
  • Build a complete neural/semantic information retrieval system and attach it to conversational LLMs to build retrieval-augmented generation (RAG) chatbots and AI agents
  • Master advanced prompt engineering techniques like output structuring, chain-of-thought prompting, and semantic few-shot prompting (a sketch of the latter follows this list)
  • Customize LLM embeddings to build a complete recommendation engine from scratch with user data that outperforms out-of-the-box embeddings from OpenAI
  • Construct and fine-tune multimodal Transformer architectures from scratch using open-source LLMs and large visual datasets
  • Align LLMs using Reinforcement Learning from Human and AI Feedback (RLHF/RLAIF) to build conversational agents from open models like Llama 3 and FLAN-T5
  • Deploy prompts and custom fine-tuned LLMs to the cloud with scalability and evaluation pipelines in mind
  • Diagnose and optimize LLMs for speed, memory, and performance with quantization, probing, benchmarking, and evaluation frameworks
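
To make the last prompt engineering technique above concrete, here is a minimal, self-contained sketch of semantic few-shot prompting: the labeled examples most similar to the query in embedding space are selected as the demonstrations placed ahead of the actual question. The embed() helper, the sample reviews, and all names below are our own toy illustration, not code from the book; a real system would swap in an actual embedding model.

    import math
    from collections import Counter

    def embed(text):
        # Toy bag-of-words "embedding" so the sketch runs with no model;
        # in practice this would call a real embedding model.
        return Counter(text.lower().split())

    def cosine(a, b):
        # Cosine similarity between two sparse count vectors.
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values()))
        norm *= math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Hypothetical labeled pool to draw few-shot demonstrations from.
    EXAMPLES = [
        ("The battery died after a week.", "negative"),
        ("Setup was quick and painless.", "positive"),
        ("Shipping took longer than promised.", "negative"),
        ("The screen is bright and sharp.", "positive"),
    ]

    def build_prompt(query, k=2):
        # Rank the pool by similarity to the query, keep the top k,
        # then lay them out as demonstrations before the real question.
        q = embed(query)
        ranked = sorted(EXAMPLES, key=lambda ex: cosine(q, embed(ex[0])), reverse=True)
        shots = "\n".join(f"Review: {t}\nSentiment: {s}" for t, s in ranked[:k])
        return f"{shots}\nReview: {query}\nSentiment:"

    print(build_prompt("The screen looks washed out."))

Because the demonstrations are chosen per query rather than fixed, the prompt automatically surfaces the most relevant labeled examples, which is what distinguishes semantic few-shot prompting from ordinary static few-shot prompts.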

"A refreshing and inspiring resource. Jam-packed with practical guidance and clear explanations that leave you smarter about this incredible new field."
--Pete Huang, author of The Neuron

Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside book for details.

Contents

Foreword xi
Preface xiii
Acknowledgments xix
About the Author xxi

Part I: Introduction to Large Language Models 1

Chapter 1: Overview of Large Language Models 3
  What Are Large Language Models? 4
  Popular Modern LLMs 7
  Applications of LLMs 25
  Summary 31

Chapter 2: Semantic Search with LLMs 33
  Introduction 33
  The Task 34
  Solution Overview 36
  The Components 37
  Putting It All Together 53
  The Cost of Closed-Source Components 57
  Summary 58

Chapter 3: First Steps with Prompt Engineering 59
  Introduction 59
  Prompt Engineering 59
  Working with Prompts Across Models 70
  Summary 74

Chapter 4: The AI Ecosystem: Putting the Pieces Together 75
  Introduction 75
  The Ever-Shifting Performance of Closed-Source AI 76
  AI Reasoning versus Thinking 77
  Case Study 1: Retrieval Augmented Generation 79
  Case Study 2: Automated AI Agents 87
  Conclusion 93

Part II: Getting the Most Out of LLMs 95

Chapter 5: Optimizing LLMs with Customized Fine-Tuning 97
  Introduction 97
  Transfer Learning and Fine-Tuning: A Primer 99
  A Look at the OpenAI Fine-Tuning API 102
  Preparing Custom Examples with the OpenAI CLI 104
  Setting Up the OpenAI CLI 108
  Our First Fine-Tuned LLM 109
  Summary 119

Chapter 6: Advanced Prompt Engineering 121
  Introduction 121
  Prompt Injection Attacks 121
  Input/Output Validation 123
  Batch Prompting 126
  Prompt Chaining 128
  Case Study: How Good at Math Is AI? 135
  Summary 145

Chapter 7: Customizing Embeddings and Model Architectures 147
  Introduction 147
  Case Study: Building a Recommendation System 148
  Summary 166

Chapter 8: AI Alignment: First Principles 167
  Introduction 167
  Aligned to Whom and to What End? 167
  Alignment as a Bias Mitigator 173
  The Pillars of Alignment 176
  Constitutional AI: A Step Toward Self-Alignment 195
  Conclusion 198

Part III: Advanced LLM Usage 199

Chapter 9: Moving Beyond Foundation Models 201
  Introduction 201
  Case Study: Visual Q/A 201
  Case Study: Reinforcement Learning from Feedback 218
  Summary 228

Chapter 10: Advanced Open-Source LLM Fine-Tuning 229
  Introduction 229
  Example: Anime Genre Multilabel Classification with BERT 230
  Example: LaTeX Generation with GPT2 244
  Sinan's Attempt at Wise Yet Engaging Responses: SAWYER 248
  Summary 271

Chapter 11: Moving LLMs into Production 275
  Introduction 275
  Deploying Closed-Source LLMs to Production 275
  Deploying Open-Source LLMs to Production 276
  Summary 297

Chapter 12: Evaluating LLMs 299
  Introduction 299
  Evaluating Generative Tasks 300
  Evaluating Understanding Tasks 317
  Conclusion 328
  Keep Going! 329

Part IV: Appendices 331
  Appendix A: LLM FAQs 333
  Appendix B: LLM Glossary 339
  Appendix C: LLM Application Archetypes 345

Index 349
All listed prices include VAT.

Polyteknisk Boghandel

has for more than 50 years been the campus bookstore at DTU and one of Denmark's leading specialists in academic and technical literature.

We stock a wide selection of books, not just within science and technology but also in areas such as management, IT, and much more.

Physical or digital book?

In addition to printed books, we offer three different types of digital books:

Vital Source Bookshelf: A well-functioning e-book platform where the book is downloaded to your computer and/or mobile device.

You need the free Bookshelf software to read the books; it has good built-in tools for searching, highlighting, note-taking, and more. In the vast majority of cases you will also have 1825 days of parallel online access.

Delivery: When you make the purchase, you create a login. Once you have installed the Bookshelf software, you simply log in and your book is downloaded automatically.

Adobe e-book: These are Adobe DRM e-books, which are downloaded to your local computer or mobile device.

Reading them requires special software that supports this format. The software is free, but you should make sure you have the rights to install software on the machine you intend to use.

Delivery: A download link is sent by email immediately after purchase.

Online book (Ibog): This is an online book that can be read on the publisher's website. No special software is required; the book is read in an ordinary browser.

Delivery: Our staff will send you an access key by email.

Please note that there is no right of return or cancellation on digital products.