RPGD: A Small-Batch Parallel Gradient Descent Optimizer with Explorative Resampling for Nonlinear Model Predictive Control

Frederik Heetmeyer, Marcin Paluch, Diego Bolliger, Florian Bolli, Xiang Deng, Ennio Filicicchia, Tobi Delbruck

Nonlinear model predictive control often involves nonconvex optimization, for which real-time control systems require fast and numerically stable solutions. This work proposes RPGD, a Resampling Parallel Gradient Descent optimizer designed to exploit the small-batch parallelism of modern hardware such as neural accelerators or multithreaded microcontrollers. After initialization, it continuously maintains...
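To make the idea concrete, the following is a minimal, hypothetical sketch of a resampling parallel gradient descent loop on a toy integrator-tracking problem. It is not the authors' implementation: the cost function, finite-difference gradients, learning rate, and the keep-best/perturb resampling rule are all illustrative assumptions; the paper's actual optimizer, dynamics model, and hardware mapping may differ.

```python
import numpy as np

def rollout_cost(u_seq, x0=0.0, target=1.0, dt=0.1):
    # Illustrative cost: quadratic tracking error over a rollout of
    # simple integrator dynamics, plus a small control penalty.
    x, cost = x0, 0.0
    for u in u_seq:
        x = x + dt * u
        cost += (x - target) ** 2 + 0.01 * u ** 2
    return cost

def gd_step(population, lr=0.5, eps=1e-4):
    # One gradient-descent step per candidate (gradients via central
    # finite differences here; a real implementation would use autodiff
    # and evaluate the whole batch in parallel).
    new_pop = []
    for u in population:
        grad = np.zeros_like(u)
        for i in range(len(u)):
            up, dn = u.copy(), u.copy()
            up[i] += eps
            dn[i] -= eps
            grad[i] = (rollout_cost(up) - rollout_cost(dn)) / (2 * eps)
        new_pop.append(u - lr * grad)
    return new_pop

def resample(population, n_keep=4, noise=0.5, rng=None):
    # Explorative resampling (assumed scheme): keep the best candidates
    # and replace the rest with noisy copies of the current best.
    rng = rng or np.random.default_rng(0)
    ranked = sorted(population, key=rollout_cost)
    kept, best = ranked[:n_keep], ranked[0]
    refreshed = [best + noise * rng.standard_normal(best.shape)
                 for _ in range(len(population) - n_keep)]
    return kept + refreshed

rng = np.random.default_rng(0)
horizon, batch = 10, 8
pop = [rng.standard_normal(horizon) for _ in range(batch)]
for step in range(30):
    pop = gd_step(pop)
    if step % 10 == 9:  # periodic explorative resampling
        pop = resample(pop, rng=rng)
best = min(pop, key=rollout_cost)
print(f"best cost: {rollout_cost(best):.3f}")
```

In a receding-horizon loop, the converged population would be warm-started at the next control step rather than reinitialized, which is what makes maintaining the batch between steps attractive.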