A theory of continuous generative flow networks

Salem Lahlou, Tristan Deleu, Pablo Lemos, Dinghuai Zhang, Alexandra Volokhova, Alex Hernández-García, Léna Néhale Ezzine, Yoshua Bengio, Nikolay Malkin

Generative flow networks (GFlowNets) are amortized variational inference algorithms that are trained to sample from unnormalized target distributions over compositional objects. A key limitation of GFlowNets to date has been that they are restricted to discrete spaces. We present a theory for generalized GFlowNets, which encompasses both existing discrete GFlowNets and those with continuous or hybrid state spaces, and perform experiments with two goals in mind. First, we illustrate critical points of the theory and the importance of various assumptions. Second, we empirically demonstrate how observations about discrete GFlowNets transfer to the continuous case and show strong results compared to non-GFlowNet baselines on several previously studied tasks. This work greatly broadens the applicability of GFlowNets in probabilistic inference and a variety of modeling settings.
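To make the training objective concrete, a minimal sketch of the trajectory balance loss (Malkin et al., 2022), one standard GFlowNet objective that also carries over to the continuous setting described here, is shown below. The function name and the per-trajectory interface are illustrative assumptions, not the paper's implementation; in practice the log-probabilities come from learned forward and backward policies and the loss is minimized over sampled trajectories.

```python
import math

def trajectory_balance_loss(log_Z, log_pf_steps, log_pb_steps, log_reward):
    """Squared trajectory-balance residual for one complete trajectory.

    The trajectory balance condition states that, for a trajectory
    s_0 -> ... -> s_n = x,
        Z * prod_t P_F(s_{t+1} | s_t) = R(x) * prod_t P_B(s_t | s_{t+1}),
    so in log space the residual below is zero when the condition holds.

    log_Z         -- learned log-partition-function estimate
    log_pf_steps  -- per-step forward-policy log-probabilities
    log_pb_steps  -- per-step backward-policy log-probabilities
    log_reward    -- log R(x) of the terminal object x
    """
    residual = log_Z + sum(log_pf_steps) - log_reward - sum(log_pb_steps)
    return residual ** 2

# Toy check: a one-step trajectory where the forward policy exactly
# matches the (normalized) reward gives zero loss.
loss = trajectory_balance_loss(
    log_Z=0.0,                      # Z = 1 (reward already normalized)
    log_pf_steps=[math.log(0.5)],   # P_F picks x with probability 0.5
    log_pb_steps=[0.0],             # single parent: P_B = 1
    log_reward=math.log(0.5),       # R(x) = 0.5
)
```

In the continuous case the per-step log-probabilities are log-densities of the policy's transition kernels rather than log-probabilities of discrete actions, but the form of the objective is unchanged.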