The Unreasonable Effectiveness of Few-shot Learning for Machine Translation

Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Melvin Johnson, Orhan Firat

We demonstrate the potential of few-shot translation systems, trained with unpaired language data, for both high- and low-resource language pairs. We show that with only five examples of high-quality translation data shown at inference, a transformer decoder-only model trained solely with self-supervised learning is able to match specialized supervised state-of-the-art models as well as more general commercial translation systems. In particular, we outperform the best-performing system on the WMT'21 English-Chinese news translation task using only five examples of English-Chinese parallel data at inference. Furthermore, the resulting models are two orders of magnitude smaller than state-of-the-art language models. We then analyze the factors that impact the performance of few-shot translation systems, and highlight that the quality of the few-shot demonstrations heavily determines the quality of the translations generated by our models. Finally, we show that the few-shot paradigm also provides a way to control certain attributes of the translation: using only five examples at inference, we are able to control for regional varieties and formality, paving the way towards controllable machine translation systems.
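
To make the few-shot setup concrete, the sketch below shows one common way to format demonstration pairs as a prompt for a decoder-only language model. This is an illustrative assumption, not the paper's actual prompt format: the language tags, the sample demonstration pair, and the `model.generate` call are hypothetical placeholders.

```python
# Minimal sketch of few-shot prompting for translation (illustrative only;
# the prompt layout and model interface are assumptions, not the paper's).

def build_few_shot_prompt(demonstrations, source_sentence):
    """Format demonstration pairs followed by the new source sentence;
    a decoder-only model then continues the pattern, emitting the
    translation after the final 'Chinese:' tag."""
    lines = []
    for src, tgt in demonstrations:
        lines.append(f"English: {src}")
        lines.append(f"Chinese: {tgt}")
    lines.append(f"English: {source_sentence}")
    lines.append("Chinese:")  # left open for the model to complete
    return "\n".join(lines)

# Five high-quality English-Chinese pairs would go here (one shown).
demonstrations = [
    ("The weather is pleasant today.", "今天天气宜人。"),
]

prompt = build_few_shot_prompt(demonstrations, "Where is the train station?")
# translation = model.generate(prompt)  # hypothetical model call
print(prompt)
```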