An Effective Meaningful Way to Evaluate Survival Models

Shi-Ang Qi, Neeraj Kumar, Mahtab Farrokh, Weijie Sun, Li-Hao Kuan, Rajesh Ranganath, Ricardo Henao, Russell Greiner

One straightforward metric to evaluate a survival prediction model is based on the Mean Absolute Error (MAE): the average of the absolute difference between the time predicted by the model and the true event time, over all subjects. Unfortunately, computing this is challenging because, in practice, the test set includes (right) censored individuals, meaning we do not know when a censored individual actually experienced the event. In this paper, we explore various metrics to estimate MAE for survival datasets that include (many) censored individuals. Moreover, we introduce a novel and effective approach for generating realistic semi-synthetic survival datasets to facilitate the evaluation of metrics. Our findings, based on the analysis of the semi-synthetic datasets, reveal that our proposed metric (MAE using pseudo-observations) is able to rank models accurately based on their performance, and often closely matches the true MAE; in particular, it is better than several alternative methods.
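The core idea can be sketched as follows. This is a minimal, illustrative implementation, not the paper's exact method: it uses the standard jackknife pseudo-observation construction on the Kaplan-Meier restricted mean survival time, substituting the pseudo-observation as the "best guess" event time for censored subjects. All function names (`km_mean`, `pseudo_observations`, `mae_pseudo`) are ours, and details such as tie handling and restriction time are simplified.

```python
import numpy as np

def km_mean(times, events):
    """Restricted mean survival time: area under the Kaplan-Meier
    curve up to the largest observed time. Assumes distinct times
    for simplicity (a sketch, not a production estimator)."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    n = len(t)
    at_risk = n - np.arange(n)               # subjects still at risk at each time
    surv = np.cumprod(1.0 - e / at_risk)     # KM survival estimate S(t_k)
    grid = np.concatenate([[0.0], t])        # step-function breakpoints
    s = np.concatenate([[1.0], surv])
    return np.sum(np.diff(grid) * s[:-1])    # integrate the step function

def pseudo_observations(times, events):
    """Jackknife pseudo-observation for subject i:
    n * theta(all) - (n - 1) * theta(all except i)."""
    n = len(times)
    theta = km_mean(times, events)
    po = np.empty(n)
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        po[i] = n * theta - (n - 1) * km_mean(times[mask], events[mask])
    return po

def mae_pseudo(pred, times, events):
    """MAE using the observed time for uncensored subjects and the
    pseudo-observation as a surrogate event time for censored ones."""
    po = pseudo_observations(np.asarray(times), np.asarray(events))
    target = np.where(np.asarray(events) == 1, times, po)
    return np.mean(np.abs(np.asarray(pred) - target))
```

When the test set has no censoring, `mae_pseudo` reduces to the ordinary MAE, which is the sanity check one would expect from any censoring-aware surrogate.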