Zero-Shot Text-to-Image Generation

Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever

Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.
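To make the single-stream formulation concrete, below is a minimal PyTorch sketch of training a transformer to autoregressively model concatenated text and image tokens with a shared vocabulary and a causal mask. The module choices and model width here are illustrative assumptions, not the paper's implementation: the actual model is a 12-billion-parameter sparse transformer, and image tokens come from a discrete VAE with an 8192-entry codebook (256 text tokens followed by a 32x32 grid of 1024 image tokens).

```python
# Minimal sketch, assuming a PyTorch decoder-only transformer built from
# nn.TransformerEncoder with a causal mask. All sizes below except the
# vocabulary/grid sizes quoted from the paper are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

TEXT_VOCAB = 16384   # assumed text BPE vocabulary size
IMAGE_VOCAB = 8192   # discrete VAE codebook size (from the paper)
TEXT_LEN = 256       # text token budget (from the paper)
IMAGE_LEN = 1024     # 32x32 grid of image tokens (from the paper)
D_MODEL = 512        # illustrative model width, far below the real model

class SingleStreamTransformer(nn.Module):
    """Autoregressively models [text tokens ; image tokens] as one sequence."""
    def __init__(self):
        super().__init__()
        # Offset image token ids so text and image share one embedding table.
        self.embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, D_MODEL)
        self.pos = nn.Embedding(TEXT_LEN + IMAGE_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(D_MODEL, TEXT_VOCAB + IMAGE_VOCAB)

    def forward(self, text_ids, image_ids):
        # Concatenate text and (offset) image tokens into a single stream.
        stream = torch.cat([text_ids, image_ids + TEXT_VOCAB], dim=1)
        seq_len = stream.size(1)
        pos = torch.arange(seq_len, device=stream.device)
        x = self.embed(stream) + self.pos(pos)
        # Causal mask: each position attends only to earlier positions.
        mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=stream.device),
            diagonal=1,
        )
        x = self.blocks(x, mask=mask)
        return self.head(x)

model = SingleStreamTransformer()
text = torch.randint(0, TEXT_VOCAB, (2, TEXT_LEN))
image = torch.randint(0, IMAGE_VOCAB, (2, IMAGE_LEN))
logits = model(text, image)
# Next-token prediction over the whole stream: shift targets left by one.
stream = torch.cat([text, image + TEXT_VOCAB], dim=1)
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, logits.size(-1)),
    stream[:, 1:].reshape(-1),
)
```

At sampling time, the same model would be conditioned on the text tokens and the image tokens decoded one at a time; because text and image live in one stream, no task-specific architecture is needed beyond the shared vocabulary offset.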