CCE Theses and Dissertations

Date of Award

2021

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

College of Computing and Engineering

Advisor

Sumitra Mukherjee

Committee Member

Michael J. Laszlo

Committee Member

Frank Mitropoulos

Keywords

clinical decision making, Healthcare Cost and Utilization Project (HCUP) Nationwide Readmissions Database (NRD), local interpretable model-agnostic explanations (LIME), machine learning models

Abstract

Despite machine learning models being increasingly used in medical decision-making and meeting predictive accuracy standards for classification, they remain untrusted black boxes because decision-makers lack insight into their complex logic. It is therefore necessary to develop interpretable machine learning models that engender trust in the knowledge they generate and contribute to clinical decision-makers' intention to adopt them in the field.

The goal of this dissertation was to systematically investigate the applicability of interpretable model-agnostic methods to explaining the predictions of black-box machine learning models for medical decision-making. As a proof of concept, this study addressed the problem of predicting the risk of emergency readmission within 30 days of discharge for heart failure patients. Using a benchmark data set, supervised classification models of differing complexity were trained to perform the prediction task. Specifically, Logistic Regression (LR), Random Forests (RF), Decision Trees (DT), and Gradient Boosting Machines (GBM) models were constructed using the Healthcare Cost and Utilization Project (HCUP) Nationwide Readmissions Database (NRD). Precision, recall, and area under the ROC curve were used to measure each model's predictive accuracy. Local Interpretable Model-Agnostic Explanations (LIME) was used to generate explanations from the underlying trained models. The LIME explanations were empirically evaluated using explanation stability and local fit (R²).
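For illustration only, the following minimal Python sketch (not taken from the dissertation) shows how such a pipeline could be assembled with scikit-learn and the lime package. The input file hf_readmissions.csv, the label column readmit_30d, and all hyperparameters are hypothetical placeholders, not the study's actual preprocessing of the HCUP NRD.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.metrics import precision_score, recall_score, roc_auc_score
    from lime.lime_tabular import LimeTabularExplainer

    # Hypothetical preprocessed NRD extract: one row per heart-failure
    # discharge; readmit_30d = 1 if an emergency readmission occurred
    # within 30 days.
    df = pd.read_csv("hf_readmissions.csv")
    feature_names = [c for c in df.columns if c != "readmit_30d"]
    X, y = df[feature_names].values, df["readmit_30d"].values
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    # Four classifiers of differing complexity, as in the study.
    models = {
        "LR": LogisticRegression(max_iter=1000),
        "DT": DecisionTreeClassifier(max_depth=8),
        "RF": RandomForestClassifier(n_estimators=200),
        "GBM": GradientBoostingClassifier(),
    }

    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        prob = model.predict_proba(X_te)[:, 1]
        print(name,
              "precision=%.3f" % precision_score(y_te, pred),
              "recall=%.3f" % recall_score(y_te, pred),
              "AUC=%.3f" % roc_auc_score(y_te, prob))

    # LIME treats each trained model as a black box: it perturbs one
    # instance, queries predict_proba, and fits a weighted linear
    # surrogate around that instance.
    explainer = LimeTabularExplainer(
        X_tr, feature_names=feature_names,
        class_names=["no readmit", "readmit"], discretize_continuous=True)
    exp = explainer.explain_instance(
        X_te[0], models["DT"].predict_proba, num_features=10)
    print(exp.as_list())   # (feature, weight) pairs of the local model
    print(exp.score)       # R^2 of the local surrogate fit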

The results demonstrated that the local explanations generated by LIME produced better estimates for Decision Tree (DT) classifiers than for the other models.
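As a sketch of how such a comparison could be quantified (illustrative code, not the dissertation's), repeated explanations of one instance can be scored for stability via pairwise Jaccard similarity of their top-k feature sets, and for local fit via the surrogate's R² (the score attribute of a lime explanation). The snippet reuses explainer, X_te, and models from the sketch above.

    import numpy as np

    def jaccard(a, b):
        # Overlap between two top-k feature sets.
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)

    def lime_stability_and_fit(explainer, instance, predict_fn, k=10, runs=10):
        # Re-explain the same instance several times; LIME's random
        # sampling can change both the top-k features and the local R^2.
        tops, r2s = [], []
        for _ in range(runs):
            exp = explainer.explain_instance(instance, predict_fn,
                                             num_features=k)
            tops.append([f for f, _ in exp.as_list()])
            r2s.append(exp.score)
        pairs = [jaccard(tops[i], tops[j])
                 for i in range(runs) for j in range(i + 1, runs)]
        return float(np.mean(pairs)), float(np.mean(r2s))

    # Example: compare explanation stability and local fit for DT vs. GBM.
    dt_stab, dt_r2 = lime_stability_and_fit(
        explainer, X_te[0], models["DT"].predict_proba)
    gbm_stab, gbm_r2 = lime_stability_and_fit(
        explainer, X_te[0], models["GBM"].predict_proba)
    print("DT:  stability=%.3f  mean R^2=%.3f" % (dt_stab, dt_r2))
    print("GBM: stability=%.3f  mean R^2=%.3f" % (gbm_stab, gbm_r2))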
