digicode: AI300V
Operationalize Machine Learning & Gen AI Solutions – Flexible Training
AI-300
Course facts
- Managing the ML model lifecycle using AutoML, MLflow tracking, and the Responsible AI dashboard
- Optimizing model performance by defining a hyperparameter search space and running sweep jobs
- Automating MLOps workflows by building Azure ML pipelines and securing them with service principals and GitHub Actions
- Enforcing robust deployment practices using feature-based development, branch protection, and approval gates across GitHub environments
- Understanding Generative AI fundamentals, including use cases, model selection, and GenAIOps for application lifecycle management
- Applying version control to prompts as code assets, designing GitHub repositories for safe testing and deployment
- Designing Gen AI evaluation experiments using clear metrics, Git-based workflows, and rubrics for automated and human quality assurance
- Monitoring Gen AI apps for production readiness, tracking key metrics (latency, token usage), and setting up tracing to identify performance bottlenecks
1 Experiment with Azure Machine Learning
Learn how to find the best machine learning model with automated machine learning (AutoML), MLflow-tracked notebooks, and the Responsible AI dashboard.
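An AutoML experiment like the one this module covers can be submitted declaratively with the Azure ML CLI (v2). The sketch below assumes a classification task, a registered MLTable data asset named `training-data`, a label column named `label`, and a compute cluster named `cpu-cluster`; all of those names are illustrative.

```yaml
# automl-job.yml — illustrative AutoML classification job for `az ml job create`
$schema: https://azuremlschemas.azureedge.net/latest/autoMLJob.schema.json
type: automl
task: classification
experiment_name: find-best-model
primary_metric: accuracy
target_column_name: label            # assumed label column
training_data:
  type: mltable
  path: azureml:training-data@latest # assumed registered data asset
limits:
  max_trials: 10
  timeout_minutes: 60
compute: azureml:cpu-cluster
```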
2 Perform hyperparameter tuning with Azure Machine Learning
Learn how to perform hyperparameter tuning with a sweep job in Azure Machine Learning.
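A sweep job expresses the hyperparameter search space declaratively and lets Azure ML launch trials for you. The following CLI (v2) YAML is a minimal sketch, assuming a `train.py` script under `src/` that logs an `accuracy` metric via MLflow, a curated environment, and a cluster named `cpu-cluster` (all assumed names):

```yaml
# sweep-job.yml — illustrative hyperparameter sweep
$schema: https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json
type: sweep
trial:
  code: src
  command: >-
    python train.py
    --learning_rate ${{search_space.learning_rate}}
    --batch_size ${{search_space.batch_size}}
  environment: azureml:AzureML-sklearn-1.5@latest  # assumed environment name
sampling_algorithm: random
search_space:
  learning_rate:
    type: uniform
    min_value: 0.001
    max_value: 0.1
  batch_size:
    type: choice
    values: [16, 32, 64]
objective:
  goal: maximize
  primary_metric: accuracy
limits:
  max_total_trials: 20
  max_concurrent_trials: 4
compute: azureml:cpu-cluster
```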
3 Run pipelines in Azure Machine Learning
Learn how to create and use components to build a pipeline in Azure Machine Learning. Run and schedule Azure Machine Learning pipelines to automate machine learning workflows.
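A pipeline job wires reusable components together, with one step's outputs feeding the next step's inputs through `${{parent.jobs.<step>.outputs.<name>}}` references. A minimal two-step sketch, assuming component YAML files under `components/` and a registered data asset named `raw-data` (both assumed):

```yaml
# pipeline-job.yml — illustrative two-step training pipeline
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: prep-and-train
jobs:
  prep:
    type: command
    component: file:./components/prep.yml
    inputs:
      raw_data:
        type: uri_file
        path: azureml:raw-data@latest
  train:
    type: command
    component: file:./components/train.yml
    inputs:
      training_data: ${{parent.jobs.prep.outputs.prepped_data}}
settings:
  default_compute: azureml:cpu-cluster
```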
4 Trigger Azure Machine Learning jobs with GitHub Actions
Learn how to automate your machine learning workflows by using GitHub Actions.
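Triggering a training job from GitHub Actions typically means authenticating with `azure/login` and submitting a job YAML through the Azure ML CLI. A workflow sketch, assuming a service-principal secret named `AZURE_CREDENTIALS`, a job file at `src/job.yml`, and resource names that are placeholders:

```yaml
# .github/workflows/train.yml — illustrative manually triggered workflow
name: train-model
on:
  workflow_dispatch:
jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Submit Azure ML job
        run: |
          az extension add --name ml
          az ml job create --file src/job.yml \
            --resource-group my-rg --workspace-name my-workspace
```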
5 Trigger GitHub Actions with feature-based development
Learn how to protect your main branch and how to trigger tasks in the machine learning workflow based on changes to the code.
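With feature-based development, checks run when a pull request targets the protected `main` branch, optionally filtered to the paths that matter. A minimal trigger sketch (branch protection itself is configured in repository settings, not in workflow YAML; the paths are illustrative):

```yaml
# Illustrative trigger: run checks on PRs into main that touch training code
on:
  pull_request:
    branches:
      - main
    paths:
      - "src/**"
      - "pipelines/**"
```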
6 Work with environments in GitHub Actions
Learn how to train, test, and deploy a machine learning model by using environments as part of your machine learning operations (MLOps) strategy.
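GitHub environments let one workflow promote a model through stages, with required reviewers acting as approval gates. A sketch of the job structure, assuming environments named `dev` and `production` have been created in the repository settings:

```yaml
# Illustrative staged jobs; approval rules live on the environments themselves
jobs:
  test:
    runs-on: ubuntu-latest
    environment: dev
    steps:
      - run: echo "run integration tests against the dev endpoint"
  deploy:
    needs: test
    runs-on: ubuntu-latest
    environment: production   # pauses for required reviewers if configured
    steps:
      - run: echo "deploy to the production endpoint"
```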
7 Deploy a model with GitHub Actions
Learn how to automate and test model deployment with GitHub Actions and the Azure Machine Learning CLI (v2).
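Deployment with the CLI (v2) centers on a managed online endpoint plus a deployment definition. A minimal sketch of the deployment YAML, assuming a registered model named `my-model` and an endpoint named `my-endpoint` (both assumed):

```yaml
# deployment.yml — illustrative managed online deployment
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-model@latest
instance_type: Standard_DS3_v2
instance_count: 1
```

A workflow step could then run `az ml online-deployment create --file deployment.yml` and shift endpoint traffic to the new deployment once smoke tests pass.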
8 Plan and prepare a GenAIOps solution
Learn how to develop chat applications with language models using a code-first development approach. Developing generative AI apps code-first produces robust, reproducible flows that are integral to generative AI operations (GenAIOps).
9 Manage prompts for agents in Microsoft Foundry with GitHub
Learn how to manage AI prompts as versioned assets using GitHub. Apply software engineering best practices to create, test, and promote prompt versions used in Microsoft Foundry as part of a GenAIOps workflow.
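One pattern consistent with this module's approach is storing each prompt as a file whose version is encoded in its name, so that Git history and pull requests govern every change. A stdlib-only sketch of a loader for such a layout (the `prompts/<name>/v<major>.<minor>.md` convention is purely illustrative, not a Foundry requirement):

```python
from pathlib import Path

def load_prompt(root: Path, name: str, version: str = "") -> str:
    """Return the text of a prompt asset.

    Prompts live at <root>/<name>/v<major>.<minor>.md; when no version
    is pinned, the highest version wins.
    """
    folder = root / name
    if version:
        return (folder / f"v{version}.md").read_text(encoding="utf-8")
    candidates = sorted(
        folder.glob("v*.md"),
        key=lambda p: tuple(int(part) for part in p.stem[1:].split(".")),
    )
    if not candidates:
        raise FileNotFoundError(f"no versions of prompt {name!r} under {folder}")
    return candidates[-1].read_text(encoding="utf-8")
```

Pinning a version in production while tests exercise the latest version gives the safe-testing/promotion split the module describes.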
10 Evaluate and optimize AI agents through structured experiments
Learn how to optimize AI agents through structured evaluation that transforms guesswork into evidence-based engineering decisions. You'll explore how to design evaluation experiments with clear metrics for quality, cost, and performance; organize experiments using Git-based workflows; create evaluation rubrics for consistent scoring; and compare results to make informed optimization decisions.
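The rubric idea reduces to a small scoring helper: each criterion gets a score on a fixed scale plus a weight, and experiments are compared on the weighted average. A stdlib-only sketch (the criterion names and weights below are made up for illustration):

```python
def rubric_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-criterion scores (each on a 1-5 scale)."""
    missing = set(weights) - set(scores)
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    total_weight = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_weight

# Illustrative rubric: groundedness matters most, then relevance, then fluency
weights = {"groundedness": 0.5, "relevance": 0.3, "fluency": 0.2}
baseline = rubric_score({"groundedness": 3, "relevance": 4, "fluency": 5}, weights)
candidate = rubric_score({"groundedness": 5, "relevance": 4, "fluency": 4}, weights)
```

Scoring both a baseline and a candidate agent against the same rubric turns "which version is better?" into a number you can defend.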
11 Automate AI evaluations with Microsoft Foundry and GitHub Actions
Learn how to implement automated evaluations for AI agent responses using Microsoft Foundry evaluators, create evaluation datasets from production data and synthetic generation, run batch evaluations with Python scripts, and integrate evaluation workflows into GitHub Actions for continuous quality assurance.
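Wiring evaluations into CI means running the batch evaluation script whenever prompts or agent code change. A workflow sketch in the same spirit (the script path, dataset path, and secret name are assumptions, not Foundry requirements):

```yaml
# .github/workflows/evaluate.yml — illustrative continuous evaluation
name: evaluate-agent
on:
  pull_request:
    paths:
      - "prompts/**"
      - "src/agent/**"
jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - name: Run batch evaluation
        env:
          AZURE_AI_PROJECT: ${{ secrets.AZURE_AI_PROJECT }}
        run: python scripts/run_evaluation.py --dataset data/eval.jsonl
```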
12 Monitor your generative AI application
Learn how to monitor the performance of your generative AI application using Microsoft Foundry. This module teaches you to track key metrics like latency and token usage to make informed, cost-effective deployment decisions.
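The metrics this module tracks reduce to simple aggregates over request records. A stdlib-only sketch, assuming each record carries a latency in milliseconds and prompt/completion token counts (the field names are illustrative):

```python
import statistics

def summarize(records: list) -> dict:
    """Aggregate latency and token usage for a batch of request records."""
    latencies = sorted(r["latency_ms"] for r in records)
    tokens = [r["prompt_tokens"] + r["completion_tokens"] for r in records]
    p95_index = max(0, round(0.95 * len(latencies)) - 1)
    return {
        "requests": len(records),
        "latency_p50_ms": statistics.median(latencies),
        "latency_p95_ms": latencies[p95_index],
        "total_tokens": sum(tokens),
        "avg_tokens_per_request": sum(tokens) / len(records),
    }
```

Tail latency (p95) and tokens per request are the two numbers that most directly drive the cost-effective deployment decisions the module discusses.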
13 Analyze and debug your generative AI app with tracing
Learn how to implement tracing in your generative AI applications using Microsoft Foundry and OpenTelemetry. This module teaches you to capture detailed execution flows, debug complex workflows, and understand application behavior for better reliability and optimization.
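OpenTelemetry structures tracing as nested, timed spans. The core idea can be shown without the SDK; the following stdlib-only stand-in mimics the span concept (a simplified illustration of the pattern, not the OpenTelemetry API):

```python
import time
from contextlib import contextmanager

SPANS = []   # collected (name, depth, duration_seconds) tuples
_depth = 0   # current nesting depth

@contextmanager
def span(name: str):
    """Record how long a named step takes, tracking nesting depth."""
    global _depth
    start = time.perf_counter()
    _depth += 1
    try:
        yield
    finally:
        _depth -= 1
        SPANS.append((name, _depth, time.perf_counter() - start))

# Illustrative agent turn: retrieval and model call nested inside the request
with span("handle_request"):
    with span("retrieve_context"):
        time.sleep(0.01)   # stand-in for a vector-store lookup
    with span("call_model"):
        time.sleep(0.02)   # stand-in for a model invocation
```

It is exactly this nesting that lets you see, for example, that retrieval rather than the model call dominates end-to-end latency.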
This course is intended for data scientists, machine learning engineers, and DevOps professionals who want to design and operate production-grade AI solutions on Azure.
It suits learners who have experience in Python, a foundational understanding of machine learning concepts, and basic familiarity with DevOps practices such as source control, CI/CD, and command-line tools, and who are preparing to implement MLOps and GenAIOps workflows using Azure-native services.
- Programming experience with Python or R
- Experience developing and training machine learning models
- Familiarity with basic Azure Machine Learning concepts
- Familiarity with Git version control workflows
- Experience with Microsoft Azure AI Foundry or similar AI development platforms