Impact Highlights
Built a full DDPM training and reverse-sampling loop from scratch.
Guided generation with text-conditioned CLIP losses.
Learned disentangled latent factors and autoregressive priors.
Mission Brief
A practical generative modeling lab covering DDPM diffusion, CLIP-guided sampling, beta-VAE disentanglement, and PixelCNN.
Implemented a DDPM with a UNet denoiser, a cosine noise schedule, and reverse-diffusion sampling.
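A minimal sketch of the cosine noise schedule and the ancestral reverse-sampling loop, assuming a trained noise-prediction UNet exposed as `eps_model(x, t)`; the function names and signature are illustrative, not the project's actual API.

```python
import math
import torch

def cosine_alpha_bar(T: int, s: float = 0.008) -> torch.Tensor:
    """Cumulative signal rate alpha_bar_t under the cosine schedule (Nichol & Dhariwal)."""
    t = torch.linspace(0, T, T + 1) / T
    f = torch.cos((t + s) / (1 + s) * math.pi / 2) ** 2
    return f / f[0]  # normalize so alpha_bar at step 0 is exactly 1

@torch.no_grad()
def ddpm_sample(eps_model, shape, T: int = 1000, device: str = "cpu") -> torch.Tensor:
    """Ancestral reverse sampling: start from Gaussian noise and denoise for T steps."""
    alpha_bar = cosine_alpha_bar(T).to(device)   # length T + 1, indices 0..T
    alphas = alpha_bar[1:] / alpha_bar[:-1]      # per-step alpha_t for t = 1..T
    betas = (1.0 - alphas).clamp(max=0.999)      # per-step noise rates beta_t
    x = torch.randn(shape, device=device)        # x_T ~ N(0, I)
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = eps_model(x, t_batch)              # predicted noise eps_theta(x_t, t)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t + 1])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])   # posterior mean mu_theta
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise  # no noise is added on the final step
    return x
```

Training minimizes the usual noise-prediction objective; sampling then only needs the schedule above and the denoiser, e.g. `ddpm_sample(eps_model, (16, 1, 28, 28))` for a batch of MNIST-sized images.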
Built CLIP-guided diffusion for text-driven MNIST generation.
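A sketch of how CLIP guidance can steer each reverse step, assuming a CLIP-style image encoder and a precomputed text embedding; `clip_image_encoder` and `text_emb` are hypothetical stand-ins for the real model, and the guidance scale is a tunable knob.

```python
import torch
import torch.nn.functional as F

def clip_guidance_grad(x, text_emb, clip_image_encoder, guidance_scale: float = 100.0):
    """Gradient of a CLIP similarity loss with respect to the current sample x_t."""
    with torch.enable_grad():
        x_in = x.detach().requires_grad_(True)
        img_emb = F.normalize(clip_image_encoder(x_in), dim=-1)  # image features
        txt_emb = F.normalize(text_emb, dim=-1)                  # text prompt features
        # Negative cosine similarity: lower loss means closer to the prompt.
        loss = -(img_emb * txt_emb).sum(dim=-1).mean()
        grad = torch.autograd.grad(guidance_scale * loss, x_in)[0]
    return grad

# Inside the reverse loop, the posterior mean is nudged against this gradient
# before noise is re-added, biasing each denoising step toward the text prompt:
#   mean = mean - betas[t] * clip_guidance_grad(x, text_emb, clip_image_encoder)
```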
Implemented a beta-VAE for disentangled latent learning and a PixelCNN with type A/B masked convolutions.
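A sketch of the two remaining pieces, again with illustrative names: a PixelCNN masked convolution, where a type "A" mask hides the center pixel (used in the first layer) and type "B" allows it (used in later layers), and the beta-weighted VAE objective assuming Bernoulli pixels in [0, 1].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """PixelCNN masked convolution preserving the raster-scan autoregressive order."""
    def __init__(self, mask_type: str, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        _, _, kh, kw = self.weight.shape
        mask = torch.zeros(kh, kw)
        mask[: kh // 2, :] = 1                 # all rows above the center
        mask[kh // 2, : kw // 2] = 1           # pixels to the left of the center
        if mask_type == "B":
            mask[kh // 2, kw // 2] = 1         # type B may also see the center pixel
        self.register_buffer("mask", mask[None, None])

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

def beta_vae_loss(x, x_recon, mu, logvar, beta: float = 4.0):
    """Beta-VAE objective: reconstruction term plus beta-weighted KL to N(0, I)."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.shape[0]
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.shape[0]
    return recon + beta * kl
```

A PixelCNN stack would start with one type "A" layer, e.g. `MaskedConv2d("A", 1, 64, 7, padding=3)`, followed by type "B" layers; raising `beta` above 1 trades reconstruction quality for more disentangled latents.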
A visual narrative of noise collapsing into structured images through layered denoising waves.