Frontiers | Editorial: ML and AI Safety, Effectiveness and Explainability in Healthcare


2Aifred Health, Inc., Montreal, QC, Canada
3Department of Information Science, Bar-Ilan University, Ramat Gan, Israel
4Department of Mathematics, Queens College (CUNY), New York City, NY, United States
Editorial on the Research Topic
The increasing performance of machine learning and artificial intelligence (ML/AI) models has led to them being encountered more frequently in daily life, including in clinical medicine (Bruckert et al.; Rosenfeld et al., 2021). While concerns about the opaque “black box” nature of ML/AI tools are not new, the need for practical solutions to the interpretability problem has become more pressing as ML/AI devices move from the laboratory, through regulatory processes that have yet to fully catch up to the state-of-the-art (Benrimoh et al., 2018a), and to the bedside. This special edition targets three key domains in which innovation and clearer best practices are required for the implementation of ML/AI approaches in healthcare: ensuring safety, demonstrating effectiveness, and providing explainability. Notably, the first two have long been staples in the evaluation of drugs and medical devices (i.e., in order to be approved for human use, products must prove that they are safe and effective—often compared to a reasonable comparator) (Spławiński and Kuźniar, 2004).

The third requirement—that of explainability—appears to be unique to ML/AI, due to the challenge of explaining how models arrive at their increasingly accurate conclusions. Yet, upon closer examination, one might argue that the explainability criterion has been implied in the past: mechanisms of action of drugs and devices are generally described in their product documentation (Health Canada, 2014). However, this can be misleading. For instance, many drugs have known receptor binding profiles and putative mechanisms of action, although the precise mechanisms by which they produce their effects remain unclear despite their widespread use in clinical practice. Prime examples of this are lithium (Shaldubina et al., 2001) and electroconvulsive therapy (Scott, 2011), both longstanding and highly effective treatments whose mechanisms of action remain controversial.
Indeed, even the precise mechanism of general anesthesia is a subject of debate (Pleuvry, 2008). As such, we must consider a compromise: that of
