Interpretable Machine Learning Models in Clinical Decision Support Systems
Gayatri Prakash
Introduction: Machine learning is broadly defined as a subset of artificial intelligence (AI) that uses algorithms to learn from data and make predictions or decisions.1 It has recently begun to be incorporated into clinical decision support systems, which assist providers in making informed decisions about patient care.2 However, even though machine learning models can produce highly accurate predictions or decisions, the inner workings of a model are often difficult to interpret, or entirely unknown to the end user, making it a “black box”.3 This lack of transparency makes it difficult for providers to trust a result and recommend a treatment plan to patients. A wide variety of methods is currently being developed to elucidate the inner workings of these algorithms and to create models that are explainable, i.e., interpretable machine learning models. The purpose of this review was to identify the utility of interpretable machine learning models.
Methods: Literature was identified by searching PubMed with the terms “explainable machine learning” and “explainable artificial intelligence” in conjunction with “electronic health records” and “Clinical Decision-Making”.
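For illustration, the terms above could be combined into a single Boolean query and run against PubMed programmatically through the NCBI E-utilities esearch endpoint. The exact grouping of terms below is an assumption for the sketch; the review does not specify the literal query string used.

```python
# Minimal sketch of the PubMed search via NCBI E-utilities.
# The Boolean grouping of terms is an assumption, not the exact query used in the review.
import requests

query = (
    '("explainable machine learning" OR "explainable artificial intelligence") '
    'AND ("electronic health records" OR "Clinical Decision-Making")'
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 100},
    timeout=30,
)
result = resp.json()["esearchresult"]
print(result["count"])    # total number of matching records
print(result["idlist"])   # PubMed IDs of the first 100 matches
```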
Results: The papers identified used two main types of interpretation: global and local. Global interpretation addresses the general patterns the model as a whole has learned; local interpretation addresses patterns on a case-by-case, i.e., patient-specific, basis. These interpretations are achieved through a variety of methods, such as SHapley Additive exPlanations (so-called Shapley values)4,5,6,7, fuzzy rule-based systems7, Deep Taylor Decomposition5, and others. Shapley values, which quantify how much each input contributes to an individual decision or to the overall model output, were by far the most prevalent method in the literature on interpretable machine learning. This is likely because they can be applied to a wide variety of models and architectures, from basic decision trees to complex neural networks, whereas other methods apply only to specific model or architecture types. Fuzzy rule-based systems, for example, can only be used with classifiers built on decision trees, as they interpret each step taken along a decision tree.
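As an illustration of the local and global views described above, the sketch below applies Shapley-value attributions to a toy tabular classifier using the Python shap package. The synthetic “EHR-like” features, the outcome, and the gradient-boosted model are hypothetical stand-ins for this example, not models or data from the reviewed studies.

```python
# Minimal sketch of local and global Shapley-value explanations with the `shap` package.
# The features, outcome, and model below are illustrative assumptions only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(20, 90, n),
    "heart_rate": rng.normal(80, 15, n),
    "lactate": rng.normal(1.5, 0.8, n),
})
# Synthetic binary outcome loosely driven by lactate and age.
y = (X["lactate"] + 0.02 * X["age"] + rng.normal(0, 0.5, n) > 3.0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local interpretation: each feature's contribution to one "patient's" prediction.
print(dict(zip(X.columns, np.round(shap_values[0], 3))))

# Global interpretation: mean absolute contribution of each feature across all cases.
print(dict(zip(X.columns, np.round(np.abs(shap_values).mean(axis=0), 3))))
```

The same attribution array supports both views: a single row explains one patient's prediction (local), and averaging absolute values over rows ranks features for the model as a whole (global).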
Conclusions: Interpretable machine learning is still an area of active research; however, standardization is needed, and this requires clearly defined parameters (e.g., the extent of model transparency required). Shapley values may be the answer, or a single standard approach may not be desirable at all, since other interpretation techniques are specific to the algorithm architecture.
Works Cited:
- Yu, K. H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature biomedical engineering, 2(10), 719–731. https://doi.org/10.1038/s41551-018-0305-z
- Antoniadi, A. M., Du, Y., Guendouz, Y., Wei, L., Mazo, C., Becker, B. A., & Mooney, C. (2021). Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: A systematic review. Applied Sciences, 11(11), 5088. https://doi.org/10.3390/app11115088
- Topol E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-0300-7
- Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., & Cook, D. (2022). Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific reports, 12(1), 11734. https://doi.org/10.1038/s41598-022-15877-1
- Lauritsen, S. M., Kristensen, M., Olsen, M. V., Larsen, M. S., Lauritsen, K. M., Jørgensen, M. J., Lange, J., & Thiesson, B. (2020). Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications, 11(1), 3852. https://doi.org/10.1038/s41467-020-17431-x
- Liu, Y., Gao, J., Liu, J., Walline, J. H., Liu, X., Zhang, T., Wu, Y., Wu, J., Zhu, H., & Zhu, W. (2021). Development and validation of a practical machine-learning triage algorithm for the detection of patients in need of critical care in the emergency department. Scientific reports, 11(1), 24044. https://doi.org/10.1038/s41598-021-03104-2
- El-Sappagh, S., Alonso, J. M., Islam, S. M. R., Sultan, A. M., & Kwak, K. S. (2021). A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer’s disease. Scientific reports, 11(1), 2660. https://doi.org/10.1038/s41598-021-82098-3