Interpretation methods for machine learning models in the framework of survival analysis with censored data: a brief overview
Interpretation, or explanation, methods for predictions have become an integral part of modern black-box machine learning models, driven by the need for users to understand why a model makes a given prediction. This is especially important for survival analysis models, which are applied in medicine, system reliability, and safety, and which have features that make them difficult to explain and interpret. This paper reviews the main methods for interpreting survival models, which handle censored data and estimate characteristics of the time until a certain event occurs. A distinctive feature of such models is that their predictions are represented not as a point value but as a probabilistic function of time, for example, a survival function or a hazard function. This requires the development of special interpretation methods. The best-known methods, SurvLIME, SurvLIME-KS, SurvNAM, SurvBeX, and SurvSHAP(t), are considered; they build on the LIME and SHAP interpretation methods, the Cox model and its modifications, and the Beran estimator.
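To make concrete the abstract's point that a survival model predicts a function of time rather than a point value, the following is a minimal sketch of the Kaplan-Meier estimator of the survival function from right-censored data. The function name and interface are illustrative, not taken from the paper, and the sketch assumes distinct observation times for simplicity.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t) from
    right-censored data.

    events[i] == 1 means the event was observed at times[i];
    events[i] == 0 means the observation was censored at times[i].
    Assumes distinct observation times (no ties) for simplicity.
    Returns a step function as a list of (event_time, S(t)) pairs.
    """
    # Process observations in order of increasing time.
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    s = 1.0
    curve = []
    for i in order:
        if events[i] == 1:
            # An observed event reduces S(t) by the factor (1 - 1/n).
            s *= 1.0 - 1.0 / n_at_risk
            curve.append((times[i], s))
        # Censored or not, the subject leaves the risk set.
        n_at_risk -= 1
    return curve

# Example: five subjects, events at t=1, 2, 4; censoring at t=3, 5.
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

The output is the whole step function S(t), which is exactly the kind of function-valued prediction that methods such as SurvSHAP(t) are designed to explain pointwise in time.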