Course
digicode: GAIPRO
Generative AI in Production
Learn about the challenges that arise when productionizing generative AI-powered applications and how they differ from traditional ML.
Duration
1 day
Price
850.–
Course facts
- Describing the challenges in productionizing applications using generative AI
- Managing experimentation and evaluation for LLM-powered applications
- Productionizing LLM-powered applications
- Implementing logging and monitoring for LLM-powered applications
You will learn how to manage experimentation and tuning of your LLMs, how to deploy, test, and maintain LLM-powered applications, and best practices for logging and monitoring those applications in production.
1 Introduction to Generative AI in Production
- AI System Demo: Coffee on Wheels
- Traditional MLOps vs. GenAIOps
- Generative AI Operations
- Components of an LLM System
- Understand generative AI operations
- Compare traditional MLOps and GenAIOps
- Analyze the components of an LLM system (see the sketch after this module)
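
To make the component view concrete, below is a minimal Python sketch of an LLM system: a retriever, a prompt template, a model call, and post-processing. Every name here (Document, retrieve, build_prompt, call_model, answer) is a hypothetical stand-in for illustration, not part of the course labs or any real SDK.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Toy retriever: rank documents by naive keyword overlap with the query."""
    scored = sorted(
        corpus,
        key=lambda d: len(set(query.lower().split()) & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[Document]) -> str:
    """Prompt template: inject retrieved context ahead of the user question."""
    ctx = "\n".join(d.text for d in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

def call_model(prompt: str) -> str:
    """Placeholder for the real model call (e.g. a Vertex AI Gemini request)."""
    return f"[model response for prompt of {len(prompt)} chars]"

def answer(query: str, corpus: list[Document]) -> str:
    """End-to-end path: retrieval -> prompt assembly -> model -> post-processing."""
    return call_model(build_prompt(query, retrieve(query, corpus))).strip()

if __name__ == "__main__":
    corpus = [Document("Espresso uses 9 bars of pressure."),
              Document("Latte art needs steamed milk.")]
    print(answer("How much pressure for espresso?", corpus))
```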
2 Managing Experimentation
- Datasets and Prompt Engineering
- RAG and ReAct Architectures
- LLM Evaluation (metrics and frameworks)
- Tracking Experiments
- Experiment with datasets and prompt engineering
- Utilize RAG and ReAct architectures
- Evaluate LLMs (see the sketch after this module)
- Track experiments
- Lab: Unit Testing Generative AI Applications
- Optional Lab: Generative AI with Vertex AI: Prompt Design
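
As a flavor of what experiment tracking and evaluation involve, here is a minimal, hypothetical harness in Python: it scores prompt variants against a tiny eval set with a toy exact-match metric and records each run. The names (run_experiment, exact_match, fake_model) are illustrative assumptions, not the course's tooling.

```python
import json
import time

def exact_match(prediction: str, reference: str) -> float:
    """Toy metric: 1.0 if the normalized strings match, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def run_experiment(prompt_template: str, eval_set, model) -> dict:
    """Score one prompt variant over the eval set and return a tracked record."""
    scores = [
        exact_match(model(prompt_template.format(question=ex["question"])),
                    ex["answer"])
        for ex in eval_set
    ]
    return {
        "timestamp": time.time(),
        "prompt_template": prompt_template,
        "mean_exact_match": sum(scores) / len(scores),
        "n_examples": len(scores),
    }

if __name__ == "__main__":
    eval_set = [{"question": "2 + 2?", "answer": "4"}]

    def fake_model(prompt: str) -> str:
        return "4"  # stand-in for a real LLM call

    for template in ["Q: {question}\nA:", "Answer briefly: {question}"]:
        print(json.dumps(run_experiment(template, eval_set, fake_model)))
```

In practice fake_model would be a real model call and the records would go to an experiment tracker, but the structure carries over: a fixed eval set, an explicit metric, and one record per run so prompt variants stay comparable.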
3 Productionizing Generative AI
- Deployment, packaging, and versioning (GenAIOps)
- Testing LLM systems (unit and integration)
- Maintenance and updates (operations)
- Prompt security and migration
- Deploy, package, and version models
- Test LLM systems (see the unit-test sketch after this module)
- Maintain and update LLM models
- Manage prompt security and migration
- Lab: Vertex AI Pipelines: Qwik Start
- Lab: Safeguarding with Vertex AI Gemini API
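
To illustrate unit testing of LLM-powered code, here is a small hypothetical pytest example. FakeClient and summarize are invented for illustration; the technique that carries over to real systems is to inject a stub client so the test isolates deterministic glue code from the non-deterministic model, and to assert on properties of the output rather than exact strings.

```python
class FakeClient:
    """Stub model client returning a canned response (hypothetical)."""
    def generate(self, prompt: str) -> str:
        return "A short summary."

def summarize(text: str, client) -> str:
    """Application code under test: builds a prompt and post-processes output."""
    response = client.generate(f"Summarize in one sentence:\n{text}")
    return response.strip()

def test_summarize_returns_nonempty_single_sentence():
    result = summarize("Long input text...", FakeClient())
    assert result                   # non-empty output
    assert result.count(".") <= 1   # property check, not an exact-match
```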
4 Logging and Monitoring for Production LLM Systems
- Cloud Logging
- Prompt versioning, evaluation, and generalization
- Monitoring for evaluation-serving skew
- Continuous validation
- Utilize Cloud Logging (see the logging sketch after this module)
- Version, evaluate, and generalize prompts
- Monitor for evaluation-serving skew
- Utilize continuous validation
- Lab: Vertex AI: Gemini Evaluations Playbook
- Optional Lab: Supervised Fine Tuning with Gemini for Question and Answering
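
As a minimal sketch of the logging side, the hypothetical snippet below emits one structured record per model call using Python's standard logging; the prompt_version field is what later makes prompt evaluation and skew monitoring possible. (If the google-cloud-logging library is installed, google.cloud.logging.Client().setup_logging() attaches a handler that forwards such records to Cloud Logging.)

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm_service")  # hypothetical service name

def log_interaction(prompt_version: str, prompt: str,
                    response: str, latency_s: float) -> None:
    """Emit one structured record per model call, so prompts can be
    versioned, replayed for evaluation, and monitored for skew over time."""
    logger.info(json.dumps({
        "prompt_version": prompt_version,
        "prompt": prompt,
        "response": response,
        "latency_s": round(latency_s, 3),
    }))

log_interaction("v3", "Summarize: ...", "A short summary.", 0.42)
```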
Target audience
Developers and machine learning engineers who wish to operationalize generative AI-based applications
Prerequisites
We recommend taking the following course or equivalent knowledge: