EXPLAINING ADDITIVE WEIBULL MODEL PARAMETER ESTIMATION WITH XAI: A SHAP AND LIME ANALYSIS
DOI: https://doi.org/10.7494/csci.2026.27.1.6921

Abstract
Conventional machine learning models face limitations in time-to-event analysis because of censoring. This study introduces a Deep Additive Weibull (DAW) model that applies deep learning to the survival analysis of right-censored COVID-19 patient data. We also explore several methods for "opening the black box" of the DAW model, including local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), to enhance model trustworthiness. The DAW model leverages neural networks for survival analysis, using an autoencoder-based network to estimate survival probabilities for each patient. It achieved a concordance index of 0.9699 on the training set and 0.92339 on the test set. Our findings show that the DAW model effectively captures nonlinearities and complex feature interactions. We also assessed the impact of individual features on the model's predictions, providing valuable insights. Both SHAP and LIME plots highlight similar features as important, such as pneumonia, diabetes, age, and inmsupr, indicating consistent model behavior across explanation methods. Moreover, we demonstrated that explainable machine learning (ML) can elucidate how models make predictions, which is crucial for increasing trust in and adoption of innovative ML techniques in healthcare.
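The DAW architecture itself is not reproduced here, but two building blocks the abstract names can be sketched in plain Python: the additive Weibull survival function, S(t) = exp(-(a·t^b + c·t^d)), whose hazard is bathtub-shaped, and Harrell's concordance index, the metric behind the reported 0.9699 / 0.92339 scores. This is an illustrative sketch, not the authors' implementation; the parameter names (a, b, c, d) and the handling of tied risk scores are assumptions.

```python
import math

def additive_weibull_survival(t, a, b, c, d):
    """Survival function of the additive Weibull model:
    S(t) = exp(-(a * t**b + c * t**d)).
    Its hazard, a*b*t**(b-1) + c*d*t**(d-1), is bathtub-shaped
    when one shape parameter exceeds 1 and the other is below 1."""
    return math.exp(-(a * t ** b + c * t ** d))

def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored data.
    A pair (i, j) is comparable when the earlier time is an observed
    event (events[i] == 1); it counts as concordant when the earlier
    failure carries the higher risk score, and as half-concordant on ties."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

For example, predicted risks that perfectly reverse-order the observed event times give a concordance index of 1.0, while random scores hover near 0.5; censored subjects (events = 0) contribute only as the later member of a pair.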
License
Copyright (c) 2026 Computer Science

This work is licensed under a Creative Commons Attribution 4.0 International License.