In a notable demonstration of artificial intelligence (AI) in the healthcare sector, Dr. Christopher Sharp, chief medical information officer at Stanford Health Care, recently illustrated how AI tools are being integrated into medical practice during a routine check-up. This encounter highlighted both the advancements and challenges posed by AI in a traditionally conservative field.
Dr. Sharp began the visit by informing the patient about the technology he was employing, which uses AI to summarise conversations and assist in clinical note-taking. “Before we start, I want to just ask you a quick question,” he said while opening an app on his smartphone. His system recorded the consultation, marking a notable step in blending technology with patient care. During the examination, he spoke health metrics such as blood pressure aloud, ensuring that the AI captured the relevant data efficiently.
The adoption of AI in healthcare has gained considerable momentum in the past year, with millions of patients experiencing treatments facilitated by AI-driven tools that handle repetitive tasks. The intent is to alleviate stress on doctors, expedite patient care processes, and decrease the likelihood of errors. According to Epic Systems, the largest electronic health record provider in the United States, their AI tools are currently transcribing around 2.35 million patient visits and assisting in drafting 175,000 messages each month.
While this rapid integration suggests a promising new landscape, there are significant concerns regarding the reliability of AI-generated medical advice. Studies have revealed alarming inaccuracies, with reports indicating that AI systems such as ChatGPT provided "inappropriate" responses to 20 percent of test questions and could reflect biases present in medical practice. For instance, research found that certain chatbots perpetuated racial biases, reinforcing problematic assumptions about pain tolerance across different demographic groups.
The clinical demonstration made evident that while AI can enhance efficiency, it is not without shortcomings. Dr. Sharp encountered an instance where the AI's drafted response to a patient query contained a recommendation he deemed inappropriate. He promptly edited it, underscoring the critical need for medical professionals to scrutinise AI outputs. “Clinically, I don’t agree with all the aspects of that answer,” he noted.
Furthermore, rigorous testing of AI systems by Stanford's Roxana Daneshjou has revealed that AI does not consistently provide accurate, safe responses. Her findings indicate that a 20 percent rate of problematic answers is not acceptable for daily healthcare use. The growing inclination of patients to seek diagnoses via consumer chatbots may further complicate the situation, prompting discussion of the consequences of misinformation in health contexts.
Despite these challenges, many professionals, including Dr. Sharp, recognise the potential of AI to alleviate some administrative burdens. An increasing number of practitioners are adopting AI transcription in their practices, but studies have yielded mixed results regarding whether these tools improve clinicians' overall efficiency. The variations in outcomes raise questions about the true value of AI in enhancing productivity within a medical setting.
The integration of AI technologies, such as transcription and messaging assistance, brings both innovation and caution to the forefront of medical practice. Experts like Adam Rodman, an internal medicine doctor, express apprehension about the current state of AI, suggesting that while promising, it still has a way to go before it can reliably serve in high-stakes medical environments. "I do think this is one of those promising technologies, but it’s just not there yet," Rodman stated.
As the landscape of healthcare continues to evolve with AI, institutions are being urged to conduct ongoing research and develop safeguards to ensure that AI outputs remain effective and safe for patient care. The University of California, San Francisco, which recently implemented AI scribe software, is closely monitoring the level of human oversight in AI-generated documentation to gauge its long-term impacts. “If we see less editing happening, either the technology’s getting better or there’s a risk humans are becoming intellectually reliant on the tool,” cautioned Sara Murray, chief health AI officer at UCSF.
As AI technology continues to permeate healthcare, balancing innovation against patient safety, efficiency, and accuracy remains a critical consideration for medical practitioners and institutions alike.
Source: Noah Wire Services