I don't understand the brouhaha about artificial intelligence (AI). It's artificial -- or augmented -- but in either case, it's not real. AI cannot replace clinicians. AI cannot practice clinical medicine or serve as a substitute for clinical decision-making, even if AI can outperform humans on certain exams. When put to the real test -- for example, making utilization review decisions -- the error rate can be strikingly high.
Findings presented at the 2023 meeting of the American Society of Health-System Pharmacists showed that the AI chatbot ChatGPT provided inaccurate or incomplete information when asked about drugs, and in some cases invented references to support its answers. Researchers said the AI tool is not yet accurate enough to answer consumer or pharmacist questions. Of course it's not. AI is only as smart as the people who build it.
What do you expect from a decision tree programmed by an MBA and not an actual doctor? Or a large language model that is prone to fabricate or "hallucinate" -- that is, to confidently generate responses without backing data? If you try to find ChatGPT's sources through PubMed or a Google search, you often strike out.
The fact is, the U.S. healthcare industry has a long record of problematic AI use, including embedding algorithmic bias in patient care. In a recent study that sought to assess ChatGPT's accuracy in providing educational information on epilepsy, ChatGPT provided correct but insufficient responses to 16 of 57 questions, and one response contained a mix of correct and incorrect information. Research involving medical questions across a wide range of specialties has suggested that, despite improvements, AI should not be relied on as a sole source of medical knowledge because it lacks reliability and can be confidently -- and convincingly -- wrong.
It seems axiomatic that the development and deployment of any AI system would require expert human oversight to minimize patient risks and ensure that clinical discretion is part of the operating system. AI systems must be developed to manage biases effectively, ensuring that they are non-discriminatory, transparent, and respectful of patients' rights. Healthcare companies relying on AI technology need to input the highest-quality data and monitor the outcomes of answers to queries.
What we need is more emotional intelligence (EI) to guide artificial intelligence.
EI is fundamental in human-centered care, where empathy, compassion, and effective communication are key. Emotional intelligence fosters empathetic patient-doctor relationships, which are central to patient satisfaction and treatment adherence. Doctors with high EI can understand and manage their own emotions and those of their patients, facilitating effective communication and mutual understanding. EI is essential for managing stressful situations, making difficult decisions, and working collaboratively within healthcare teams.
Furthermore, EI plays a significant role in ethical decision-making, as it enables physicians to consider patients' emotions and perspectives when making treatment decisions. Because EI enhances the ability to identify, understand, and manage emotions in oneself and others, it is a crucial skill set that can significantly influence the quality of patient care, physician-patient relationships, and the overall healthcare experience.
AI lacks the ability to understand and respond to human emotions, a gap filled by EI. Despite its advanced capabilities, AI cannot replace the human touch in medicine. From the doctors' perspective, many still believe that touch makes important connections with patients.
Simon Spivack, MD, MPH, a pulmonologist affiliated with Albert Einstein College of Medicine and Montefiore Health System in New York, wrote that "touch traverses the boundary between healer and patient. It tells patients that they are worthy of human contact ... While the process takes extra time, and we have precious little of it, I firmly believe it's the least we can do as healers -- and as fellow human beings."
Spivack further observed: "[I]n our increasingly technology-driven future, I am quite comfortable predicting that nothing -- not bureaucratic exigencies, nor virtual medical visits, nor robots controlled by artificial intelligence -- will substitute for this essential human-to-human connection."
Patients often need reassurance, empathy, and emotional support, especially when dealing with severe or chronic illnesses. These are aspects that AI, with its current capabilities, cannot offer. I'm reminded of Data on Star Trek: The Next Generation. Data is an artificially intelligent android who is capable of touch but lacks emotions. Nothing in Data's life is more important than his quest to become more human. However, when Data acquires the "emotion chip," it overloads his relays and eventually the chip has to be removed. Once artificial, always artificial.
Harvard medical educator Bernard Chang, MD, MMSc, wrote: "[I]f the value that physicians of the future will bring to their AI-assisted in-person patient appointments is considered, it becomes clear that a thorough grounding in sensitive but effective history-taking, personally respectful and culturally humble education and counseling, and compassionate bedside manner will be more important than ever. Artificial intelligence may be able to engineer generically empathic prose, but the much more complex verbal and nonverbal patient-physician communication that characterizes the best clinical visits will likely elude it for some time."
In essence, AI and EI are not competing elements but complementary aspects in modern medical practice. While AI brings about efficiency, precision, and technological advancements, EI ensures empathetic patient interactions and effective communication. The ideal medical practice would leverage AI for tasks involving data analysis and prediction, while relying on EI for patient treatment and clinical decision-making, thereby ensuring quality and holistic patient care.
There was a reason Jean-Luc Picard was Captain of the USS Enterprise and Data was not.
Data had all the artificial intelligence he ever needed in his computer-like brain and the Enterprise's massive data banks, but ultimately it was Picard's intuitive and incisive decision-making that enabled the Enterprise crew to go where no one had gone before.
The author is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia.