“We need a pragmatic AI future in healthcare, not an academic one.”
ChatGPT 4o and author (ACC)
This manuscript from the British Medical Journal is a somewhat laudable but ultimately disappointing perspective from the Fair Universal Traceable Usable Robust Explainable (FUTURE-AI) consortium of 117 “interdisciplinary” experts from 50 countries, founded in 2021. The vast majority of the authors hold PhDs or other non-clinical degrees (MSc, MEng, DPhil, or MA), with MDs comprising only about 10-15% of the authors. This exceedingly imbalanced gathering of experts immediately raises major concerns about the framework’s practical relevance in real-world clinical settings.
The initial premise of AI in healthcare is of course correct: despite major advances in medical artificial intelligence research, clinical adoption of emerging AI solutions remains challenging owing to limited trust and ethical concerns. These experts were gathered to define international consensus guidelines for trustworthy healthcare AI, but whoever was on the committee that invited the experts very much underestimated the need to include more clinicians in this project (especially with this being published in a medical journal). The framework includes 30 detailed recommendations for building trustworthy and deployable AI, and ironically the authors state that it emphasizes “multi-stakeholder collaboration”. In addition, the authors failed to discuss that AI adoption often faces real-world legal and financial constraints, as well as AI regulations that differ from country to country. There is also an overemphasis on consensus at the expense of innovation and expediency. Lastly, academic AI leaders often set unrealistic and overly complex implementation expectations for healthcare AI: one wonders just how many of these authors have actually spent significant time with clinicians in clinic or hospital settings recently to fully understand the imbroglio of healthcare in this era of AI.
Perhaps the authors missed the major reason why AI is not adopted widely in clinical medicine and healthcare around the world: clinicians as a group are rarely actively included or engaged in the design and implementation of AI projects (including this one) and are too often an afterthought. It is of course deeply ironic that this work about AI adoption in healthcare neglected this very aspect. Without much greater clinician representation and voice, this FUTURE-AI consensus is just another ivory tower rulebook, designed for theoretical AI governance committees and groups to publish their work, with little or no clinical impact or adoption.
Read the full article here