[Hero illustration: diffusion particles converging into generated forms]

Mission Brief

Deep Generative Modeling

A hands-on generative modeling lab covering DDPM diffusion, CLIP-guided sampling, beta-VAE disentanglement, and PixelCNN autoregressive modeling.

PyTorch · DDPM · CLIP · VAE · GAN · PixelCNN

Impact Highlights

  • Built a full DDPM training and reverse-sampling loop from scratch.

  • Guided generation with text-conditioned CLIP losses.

  • Learned disentangled latent factors and autoregressive priors.
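The DDPM loop above boils down to a noise schedule, a forward noising step, and an ancestral reverse step. A minimal NumPy sketch under stated assumptions (the U-Net denoiser is replaced by a caller-supplied `eps_pred`; the cosine schedule is the standard `s = 0.008` variant):

```python
import numpy as np

def cosine_alpha_bar(T, s=0.008):
    """Cumulative noise schedule alpha_bar_t for t = 0..T (cosine schedule)."""
    t = np.arange(T + 1)
    f = np.cos(((t / T + s) / (1 + s)) * np.pi / 2) ** 2
    return f / f[0]  # alpha_bar_0 = 1, decaying toward 0 at t = T

def q_sample(x0, t, alpha_bar, eps):
    """Forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def reverse_step(x_t, t, eps_pred, alpha_bar, rng):
    """One ancestral sampling step x_t -> x_{t-1} given a predicted noise eps_pred."""
    a_t = alpha_bar[t] / alpha_bar[t - 1]  # per-step alpha_t
    beta_t = 1.0 - a_t
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(a_t)
    if t > 1:
        return mean + np.sqrt(beta_t) * rng.standard_normal(x_t.shape)
    return mean  # no noise is added on the final step
```

Training would minimize the MSE between `eps_pred` and the true `eps` used in `q_sample`; sampling runs `reverse_step` from `t = T` down to `t = 1` starting from pure Gaussian noise.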

Build Notes

  • Implemented a DDPM with a U-Net denoiser, cosine noise schedule, and reverse-diffusion sampling.

  • Built CLIP-guided diffusion for text-driven MNIST generation.

  • Implemented a beta-VAE for disentangled latent learning and a PixelCNN with type A/B masked convolutions.
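CLIP guidance amounts to nudging the reverse-process mean along a gradient that increases image/text similarity. A sketch of just that update, with the CLIP gradient abstracted into a caller-supplied `grad_log_sim` (a hypothetical stand-in for differentiating CLIP similarity with respect to `x_t`):

```python
import numpy as np

def guided_reverse_mean(mean, sigma2, grad_log_sim, scale=1.0):
    """Classifier-guidance-style update: shift the reverse-process mean
    by scale * sigma_t^2 * grad of log-similarity, so each denoising step
    drifts toward images that better match the text prompt."""
    return mean + scale * sigma2 * grad_log_sim

# Toy usage: a quadratic "similarity" whose gradient pulls x toward a target.
target = np.array([1.0, -1.0])
mean = np.zeros(2)
grad = -(mean - target)  # gradient of -0.5 * ||x - target||^2
nudged = guided_reverse_mean(mean, sigma2=0.5, grad_log_sim=grad, scale=2.0)
```

In the real pipeline this gradient would come from backpropagating CLIP's image/text cosine similarity through a differentiable decode of `x_t`.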
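The beta-VAE objective is the usual VAE ELBO with the KL term up-weighted by a factor beta, which is what pressures the latent dimensions toward disentanglement. A minimal sketch assuming a diagonal Gaussian posterior and an MSE stand-in for the likelihood term:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction + beta * KL(q(z|x) || N(0, I)).

    beta = 1 recovers the standard VAE; beta > 1 trades reconstruction
    quality for a more factorized latent code.
    """
    recon = np.sum((x - x_recon) ** 2)  # MSE proxy for -log p(x|z)
    kl = 0.5 * np.sum(mu ** 2 + np.exp(log_var) - log_var - 1.0)
    return recon + beta * kl
```

With a perfect posterior match (`mu = 0`, `log_var = 0`) the KL term vanishes and the loss reduces to the reconstruction error alone.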
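The type A/B masks mentioned above enforce the PixelCNN's raster-scan autoregressive ordering: each output pixel may only depend on pixels above it and to its left. A sketch of the mask construction (the mask would be multiplied into a convolution's weights before each forward pass):

```python
import numpy as np

def pixelcnn_mask(kernel_size, mask_type):
    """Build a type A or type B mask for a PixelCNN masked convolution.

    Type A (first layer only) also zeroes the centre weight, so the model
    never sees the pixel it is predicting; type B (all later layers)
    keeps the centre, since by then it holds features, not the raw pixel.
    """
    k = kernel_size
    c = k // 2
    mask = np.ones((k, k))
    mask[c, c + (1 if mask_type == "B" else 0):] = 0.0  # centre row: right of (or at) centre
    mask[c + 1:, :] = 0.0                               # all rows below centre
    return mask
```

For a 3x3 kernel, type A keeps 4 weights and type B keeps 5; the bottom row is always fully masked.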

Image Direction

Recommended Concept

A visual narrative of noise collapsing into structured images through layered denoising waves.

Text-to-Image Prompt

Abstract diffusion process visualization, particles converging into geometric forms, deep black backdrop, cyan and magenta glows, subtle film grain, futuristic and elegant, no text

Fallback asset in use: /illustrations/generative-lab.svg