
Generative AI with Diffusion Models

$500 USD
Course Code NV-GEN-AI-DM
Duration 1 day
Available Formats Classroom

Thanks to improvements in computing power and scientific theory, generative AI is more accessible than ever before. It will play a significant role across industries, with applications ranging from creative content generation, data augmentation, simulation and planning, and anomaly detection to drug discovery and personalized recommendations. In this course, we take a deeper dive into denoising diffusion models, a popular choice for text-to-image pipelines that is disrupting several industries.

Skills Gained

By participating in this workshop, you’ll learn how to:

  • Build a U-Net to generate images from pure noise
  • Improve the quality of generated images with the Denoising Diffusion process
  • Compare Denoising Diffusion Probabilistic Models (DDPMs) with Denoising Diffusion Implicit Models (DDIMs) (see the sampling sketch after this list)
  • Control the image output with context embeddings
  • Generate images from English text prompts using CLIP
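The DDPM vs. DDIM comparison can be made concrete by contrasting a single reverse-diffusion step under each sampler. The PyTorch sketch below is illustrative only, not workshop code: it assumes a trained noise-prediction network `eps_model(x_t, t)` and a simple linear beta schedule.

```python
import torch

# Illustrative noise schedule; values and names are assumptions, not workshop code.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def ddpm_step(eps_model, x_t, t):
    """One stochastic DDPM reverse step: x_t -> x_{t-1}."""
    eps = eps_model(x_t, t)
    coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise

def ddim_step(eps_model, x_t, t, t_prev):
    """One deterministic DDIM step (eta = 0); t_prev may skip timesteps."""
    eps = eps_model(x_t, t)
    x0_pred = (x_t - torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alpha_bars[t])
    return torch.sqrt(alpha_bars[t_prev]) * x0_pred + torch.sqrt(1.0 - alpha_bars[t_prev]) * eps
```

The key contrast: DDPM injects fresh noise at every step and walks through all T timesteps, while DDIM (with eta = 0) is deterministic and can skip timesteps for faster sampling.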

Prerequisites

  • Good understanding of PyTorch
  • Good understanding of deep learning

Course Details

Workshop Outline

Introduction

From U-Nets to Diffusion

  • Build a U-Net, a type of autoencoder for images.
  • Learn about transposed convolution to increase the size of an image.
  • Learn about non-sequential neural networks and residual connections.
  • Experiment with feeding noise through the U-Net to generate new images (a minimal sketch follows this list).
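As a rough illustration of this module's ideas (not the workshop's actual code), the PyTorch sketch below builds a tiny U-Net-style autoencoder with a strided convolution for downsampling, a transposed convolution for upsampling, and a skip connection. The layer sizes and the 28x28 input are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style autoencoder: one downsampling stage, one upsampling
    stage via transposed convolution, and a skip connection from the input."""
    def __init__(self, channels=1):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, stride=2, padding=1),  # halve H and W
            nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),  # restore H and W
            nn.ReLU(),
        )
        # Skip connection: concatenate the original input with the upsampled features.
        self.out = nn.Conv2d(32 + channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        h = self.down(x)
        h = self.up(h)
        return self.out(torch.cat([h, x], dim=1))

# Feeding pure noise through a network of this shape:
model = TinyUNet()
noise = torch.randn(1, 1, 28, 28)
generated = model(noise)  # shape: (1, 1, 28, 28)
```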

Control with Context

  • Learn how to alter the output of the diffusion process by adding context embeddings (a small sketch follows this list)
  • Add model optimizations such as:
      • Sinusoidal position embeddings
      • The GELU activation function
      • Attention
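A minimal sketch of how timestep and context information can be embedded, assuming a class-label conditioning setup; the function and variable names are illustrative, not the workshop's.

```python
import math
import torch

def sinusoidal_embedding(t, dim=64):
    """Sinusoidal position embedding for a batch of timesteps `t` (shape: [B])."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    args = t.float()[:, None] * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)  # shape: [B, dim]

# Context conditioning: a learned class-label embedding combined with the
# timestep embedding before being injected into the U-Net blocks.
num_classes, dim = 10, 64
context_embed = torch.nn.Embedding(num_classes, dim)

t = torch.randint(0, 1000, (4,))               # timesteps for a batch of 4
labels = torch.randint(0, num_classes, (4,))   # class labels as context
cond = sinusoidal_embedding(t, dim) + context_embed(labels)  # shape: [4, 64]
```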

Text-to-Image with CLIP

  • Walk through the CLIP architecture to learn how it associates image embeddings with text embeddings
  • Use CLIP to train a text-to-image diffusion model (a brief text-encoding sketch follows)
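One common way to obtain CLIP text embeddings is the Hugging Face `transformers` CLIP text encoder, shown below. Only the encoding step is real code; the final commented line is a hypothetical call indicating where the embeddings would condition a diffusion U-Net.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

# Load a public CLIP text encoder and its tokenizer.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

prompt = ["a photograph of an astronaut riding a horse"]
tokens = tokenizer(prompt, padding="max_length", truncation=True, return_tensors="pt")

with torch.no_grad():
    text_embeddings = text_encoder(**tokens).last_hidden_state  # [1, seq_len, 512]

# In a text-to-image diffusion model, these embeddings would be passed to the
# U-Net's cross-attention layers as conditioning context, e.g. (hypothetical):
# noise_pred = unet(noisy_image, timestep, context=text_embeddings)
```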

State-of-the-art Models

  • Review various state-of-the-art generative AI models and connect them to the concepts learned in class
  • Discuss prompt engineering and how to better influence the output of generative AI models
  • Learn about content authenticity and how to build trustworthy models

Final Review