Multidimensional Shape Constraints

Maya Gupta, Erez Louidor, Oleksandr Mangylov, Nobu Morioka, Taman Narayan, Sen Zhao

We propose new multi-input shape constraints across four intuitive categories: complements, diminishers, dominance, and unimodality constraints. We show these shape constraints can be checked, and even enforced during training, for linear models, generalized additive models, and the nonlinear function class of multi-layer lattice models. Real-world experiments illustrate how the different shape constraints can be used to increase explainability and improve regularization, especially under non-IID train-test distribution shift.
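As a minimal illustration of how such a constraint can be checked on the simplest model class mentioned above, consider a linear model f(x) = w . x. Under one plausible reading of monotonic dominance (the exact definition used by the paper may differ; the function name and the condition w_i >= w_j >= 0 are assumptions for illustration), feature i dominates feature j when the model is monotonically increasing in both and every unit increase in the dominant feature moves the output at least as much as a unit increase in the weaker one:

```python
import numpy as np

def check_linear_dominance(weights, dominant_idx, weak_idx):
    """Check an assumed monotonic-dominance constraint on a linear model
    f(x) = w . x: the dominant feature's coefficient must be at least as
    large as the weaker feature's, and both must be non-negative so the
    model is monotonically increasing in each.

    This is a sketch for linear models only; checking the same property
    for lattice or additive models would require inspecting their
    per-region slopes instead of a single weight vector.
    """
    w = np.asarray(weights, dtype=float)
    return bool(w[dominant_idx] >= w[weak_idx] >= 0.0)

# Feature 0 (weight 2.0) dominates feature 1 (weight 0.5).
print(check_linear_dominance([2.0, 0.5, -1.0], 0, 1))  # True
# Feature 0 does not dominate feature 1 when its weight is smaller.
print(check_linear_dominance([0.5, 2.0, -1.0], 0, 1))  # False
```

Enforcing (rather than merely checking) the constraint during training would amount to adding the corresponding linear inequalities on w to the optimization, which is what makes the linear-model case tractable.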