
Introduction to Generative AI Engineering for LLMOps

Course Code WA3511
Duration 3 days
Available Formats Classroom

This Introduction to Generative AI (GenAI) course teaches DevOps and ITOps professionals how to deploy, manage, and scale GenAI and Large Language Model (LLM) applications.

Skills Gained

  • Understand the infrastructure requirements and challenges associated with LLM deployment
  • Write effective prompts
  • Integrate LLMs into monitoring, alerting, and automation tools
  • Deploy and manage open-source LLMs

Prerequisites

  • Practical Python programming and scripting for automation tasks (6+ months)
  • Making API calls and handling event streams
  • Exception handling, debugging, testing, and logging
  • Experience with containerization technologies (e.g., Docker) and orchestration platforms (e.g., Kubernetes)
  • Familiarity with CI/CD pipelines and tools, such as Jenkins, GitLab, or GitHub Actions
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and their services
  • Experience with monitoring and logging tools, such as Prometheus, Grafana, and ELK stack (Elasticsearch, Logstash, Kibana) is recommended but not required
  • Familiarity with machine learning concepts (classification, regression, clustering) is recommended

Course Details

Outline

Introduction

LLM Fundamentals for Ops

  • Introduction to Generative AI and LLMs for Operations Workflows
  • LLM Architecture and Deployment Considerations
      • Implications of LLM architecture on deployment, scaling, and resource management

Prompt Engineering for Ops

  • Introduction to Prompt Engineering
  • Techniques for creating effective prompts
  • Best practices for prompt design and optimization
  • Developing prompts for IT and traditional Ops tasks
      • Log analysis
      • Alert generation
      • Incident response
  • Improving response to production outages and IT challenges with prompt engineering
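
As a flavor of the log-analysis prompting covered above, the sketch below builds a prompt that wraps a raw log excerpt in an instruction for an LLM. The helper name and wording are illustrative, not part of the course materials:

```python
# Hypothetical helper: build a log-analysis prompt for an LLM.
def build_log_analysis_prompt(log_lines):
    """Wrap raw log lines in an instruction asking the model to
    summarize errors and suggest a likely root cause."""
    header = (
        "You are an SRE assistant. Analyze the log excerpt below, "
        "summarize any errors, and suggest a likely root cause.\n\n"
    )
    return header + "Log excerpt:\n" + "\n".join(log_lines)

# Example log lines (fabricated for illustration).
prompt = build_log_analysis_prompt([
    "2024-05-01 12:00:01 ERROR db: connection refused (host=db01)",
    "2024-05-01 12:00:02 WARN  api: retrying request (attempt 2/5)",
])
```

Keeping the instruction and the log excerpt clearly separated, as here, makes the prompt easier to iterate on and reuse across alerting and incident-response tasks.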

LLM Integration for Ops

  • Overview of key LLM APIs and libraries
      • OpenAI API
      • HuggingFace Transformers
  • Strategies for integrating LLMs into monitoring, alerting, and automation tools
  • Use Case Development
  • Real-World Case Studies
  • Building an LLM-powered monitoring and alerting system
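
A minimal sketch of the alerting use case above, assuming the `openai` Python package and an API key; the model name and the `RUN_LLM_DEMO` environment-variable guard are illustrative choices, not prescribed by the course:

```python
import os

def build_alert_messages(alert_text):
    """Build a chat-completion message list that asks the model to
    classify an alert's severity and suggest a first response step."""
    return [
        {"role": "system",
         "content": ("You are an on-call assistant. Classify the alert "
                     "severity (low/medium/high) and suggest one first "
                     "remediation step.")},
        {"role": "user", "content": alert_text},
    ]

# Set RUN_LLM_DEMO=1 (and OPENAI_API_KEY) to actually call the API.
if __name__ == "__main__" and os.environ.get("RUN_LLM_DEMO"):
    from openai import OpenAI  # requires the openai package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=build_alert_messages("Disk usage on node-3 at 95%"),
    )
    print(resp.choices[0].message.content)
```

Separating message construction from the API call keeps the prompt logic unit-testable and lets the same messages be sent through a different provider or an internal gateway.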

Deployment and Management of Open-Source LLMs

  • Introduction to Open-Source LLMs
  • Advantages and limitations in production environments
  • Best practices for deploying and managing open-source LLMs
  • Techniques for managing LLM infrastructure, scaling, and performance
  • Setting up Llama 3 from HuggingFace
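
A minimal sketch of the Llama 3 setup above, assuming the `transformers` package and access to the gated `meta-llama` repository on Hugging Face (accepting the license and running `huggingface-cli login` first); the `RUN_LLAMA_DEMO` guard is an illustrative convention to avoid downloading weights unintentionally:

```python
import os

# Official Hugging Face repo id for the 8B instruct variant.
MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"

def load_llama_pipeline(model_id=MODEL_ID):
    """Load a text-generation pipeline; downloads weights on first use.
    device_map="auto" places layers across available GPUs/CPU and
    requires the accelerate package."""
    from transformers import pipeline
    return pipeline("text-generation", model=model_id, device_map="auto")

# Set RUN_LLAMA_DEMO=1 to actually download the model and generate.
if __name__ == "__main__" and os.environ.get("RUN_LLAMA_DEMO"):
    pipe = load_llama_pipeline()
    out = pipe("Summarize: disk alerts spiked on node-3.",
               max_new_tokens=64)
    print(out[0]["generated_text"])
```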

Conclusion