Online Journal Of Public Health Informatics
Keywords: explainable artificial intelligence (XAI), artificial intelligence (AI), AI medicine, pathology informatics, radiology informatics
Artificial intelligence (AI) applied to medicine offers immense promise but also raises safety and regulatory concerns. Traditional AI produces a core algorithm result, typically without a measure of statistical confidence or an explanation of its biological-theoretical basis. Efforts are underway to develop explainable AI (XAI) algorithms that produce not only a result but also an explanation to support that result. Here we present a framework for classifying XAI algorithms applied to clinical medicine: An algorithm’s clinical scope is defined by whether the core algorithm output leads to observations (eg, tests, imaging, clinical evaluation), interventions (eg, procedures, medications), diagnoses, or prognostication. Explanations are classified by whether they provide empiric statistical information, association with a historical population or populations, or association with an established disease mechanism or mechanisms. XAI implementations can be classified based on whether algorithm training and validation took into account the actions of health care providers in response to the insights and explanations provided, or whether training was performed using only the core algorithm output as the end point. Finally, the communication modalities used to convey an XAI explanation can be used to classify algorithms and may affect clinical outcomes. This framework can be used when designing, evaluating, and comparing XAI algorithms applied to medicine.
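The four classification axes in the abstract (clinical scope, explanation type, training end point, and communication modality) can be sketched as a simple data structure. This is purely an illustrative sketch: the enum and field names below are our own assumptions based on the abstract's wording, not terminology or code from the paper itself.

```python
# Illustrative sketch of the abstract's four classification axes.
# All names here are assumptions drawn from the abstract, not the paper's
# formal framework or any published implementation.
from dataclasses import dataclass
from enum import Enum, auto


class ClinicalScope(Enum):
    OBSERVATION = auto()       # eg, tests, imaging, clinical evaluation
    INTERVENTION = auto()      # eg, procedures, medications
    DIAGNOSIS = auto()
    PROGNOSTICATION = auto()


class ExplanationType(Enum):
    EMPIRIC_STATISTICAL = auto()     # empiric statistical information
    POPULATION_ASSOCIATION = auto()  # association with historical population(s)
    MECHANISM_ASSOCIATION = auto()   # association with established disease mechanism(s)


class TrainingEndpoint(Enum):
    CORE_OUTPUT_ONLY = auto()   # trained/validated on the core algorithm output alone
    PROVIDER_RESPONSE = auto()  # training accounts for providers' actions in response


@dataclass
class XAIClassification:
    """One XAI algorithm's position along the framework's four axes."""
    scope: ClinicalScope
    explanation: ExplanationType
    training: TrainingEndpoint
    communication_modality: str  # free text, eg, "saliency heatmap overlay"


# Hypothetical example: a diagnostic algorithm whose explanation appeals to
# an established disease mechanism, trained on core output only.
example = XAIClassification(
    scope=ClinicalScope.DIAGNOSIS,
    explanation=ExplanationType.MECHANISM_ASSOCIATION,
    training=TrainingEndpoint.CORE_OUTPUT_ONLY,
    communication_modality="textual rationale in the report",
)
```

A structure like this could support side-by-side comparison of XAI algorithms during design or evaluation, as the framework intends.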
This work is licensed under a Creative Commons Attribution 4.0 International License.
Gniadek, Thomas; Kang, Jason; Theparee, Talent; and Krive, Jacob, "Framework for Classifying Explainable Artificial Intelligence (XAI) Algorithms in Clinical Medicine" (2023). HPD Articles. 297.
©Thomas Gniadek, Jason Kang, Talent Theparee, Jacob Krive