
AI Can Do Whole Echo Reports, Improve Efficiency

— AI transformation of the way cardiology uses echo "is absolutely going to happen," expert says

MedpageToday

CHICAGO -- Artificial intelligence (AI) assistance for echocardiography is becoming better validated and proving its mettle for clinical use.

An AI model fed echocardiography videos was able to complete 18 interpretation tasks at once, classifying patients on measures ranging from severe aortic valve stenosis to left ventricular systolic dysfunction and valvular regurgitation with high accuracy (median area under the receiver operating characteristic curve [AUROC] of 0.91), according to findings from the PanECHO study.

A simplified imaging protocol using just five views still had good accuracy with a median AUROC of 0.85 and normalized mean absolute error of 0.14, reported Gregory Holste, MSE, a PhD candidate at Yale School of Medicine in New Haven, Connecticut, at the American Heart Association (AHA) annual meeting.
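For readers unfamiliar with the metric, AUROC is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case; a multi-task model like PanECHO reports one AUROC per task and summarizes them with the median. The sketch below is an illustration only, with made-up labels and scores, not data from the study:

```python
# Illustration only (hypothetical data, not from PanECHO): computing
# per-task AUROC and the median across tasks for a multi-task classifier.
from statistics import median

def auroc(labels, scores):
    """Pairwise-rank AUROC for binary labels (ties count as 0.5 wins)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Three hypothetical tasks mimicking a multi-task echo report
tasks = {
    "severe_aortic_stenosis":  ([0, 0, 1, 1], [0.10, 0.40, 0.35, 0.80]),
    "lv_systolic_dysfunction": ([0, 1, 0, 1], [0.20, 0.90, 0.30, 0.70]),
    "valvular_regurgitation":  ([1, 0, 1, 0], [0.80, 0.20, 0.60, 0.40]),
}
per_task = {name: auroc(y, s) for name, (y, s) in tasks.items()}
print(per_task, median(per_task.values()))
```

A median of 0.91 across 18 such tasks, as reported, means half the tasks were discriminated at or above that level.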

"Echocardiography is really the perfect place to do AI," said study discussant David Ouyang, MD, of Cedars-Sinai Medical Center in Los Angeles. "It covers a full range of cardiac physiology and really is an opportunity to handle the Achilles heel, which is that for echocardiography there's incredible clinical variation and variability in the image quality that impacts how we assess the image."

Ouyang praised the large training dataset used -- 28,800 studies with 1.1 million videos taken from 2016 through 2022 at Yale-New Haven Hospital -- and that the AI used was open source.

"The scale of data is incredibly important in assessing the performance of AI models, so this allows us to develop the best AI models," he said. Most existing AI models for use in echo have been task specific with limited focus and were developed on small datasets.

Open source means hospital systems can take the model and its architecture with the framework to deploy into practice. "Scaling as an operational tool will require hospitals to test this tool internally, deploy it internally," said co-presenter Rohan Khera, MD, also of Yale. "You still have to go to the FDA if you need clinical application and tie it to clinical use cases. But many of the operational aspects can be done in the internal system."

AI for screening tests makes sense, especially outside the U.S. in a resource-limited global health setting, the group noted.

AI transformation of the way cardiology uses echo "is absolutely going to happen," said session moderator Nicholas Mills, MBChB, PhD, of the University of Edinburgh in Scotland.

He pointed to findings from a second study from Japan presented at the AHA session, with external validation of a more limited AI-assisted workflow through randomization of four sonographers to the AI workflow or a standard workflow over 38 days.

It showed an increased number of echo studies completed per day with AI assistance (16.7 vs 14.1 examinations per day, P<0.001) as well as better mental and physical well-being reported by the sonographers.

Independent trials evaluating effectiveness in clinical practice for AI have been "very few and far between" and thus "really important," said Mills. "Three extra scans per day, that could be really important for healthcare systems particularly."

Nobuyuki Kagiyama, MD, PhD, of Juntendo University in Tokyo, who presented the Japanese study data, said his center has already implemented the AI assistance into routine clinical practice because of how much the sonographers in the lab liked the reduced fatigue and streamlined workflow with the software.

"Simple and, you know, repetitive tasks such as screening echo is definitely something that we want AI to replace," he said. "We do not want to be replaced by AI for everything, including detailed professional measurements and discussion about the treatment strategy or those really professional and clinically enriched tasks."

However, one concern is that AI regression models can provide very confident predictions without much transparency as to how they reached a conclusion or any ability to double-check it, Ouyang noted. The PanECHO study tried to address that with external validation on data from two sites in California and by checking which views the AI determined were key, but Ouyang noted more will be needed.

Disclosures

Holste disclosed no relationships with industry.

Mills disclosed relationships with Roche Diagnostics, Siemens Healthineers, and Abbott Laboratories, as well as past relationships with Psyros Laboratories and Abbott Diagnostics.

Kagiyama disclosed relationships with Novartis Japan, AMI, Bristol Myers Squibb (BMS), AstraZeneca, Nippon Boehringer Ingelheim, Otsuka Pharmaceutical, and Eli Lilly Japan K.K., as well as a past relationship with EchoNous.

Khera disclosed relationships with BMS and Novo Nordisk, being co-founder of Ensight-AI and Evidence2Health, and being a co-inventor on a number of U.S. patent applications.

Primary Source

American Heart Association

Holste G "PanEcho: Complete AI-enabled echocardiography interpretation with multi-task deep learning" AHA 2024.

Secondary Source

American Heart Association

Kagiyama N "Artificial Intelligence-based automated ECHOcardiographic measurements and the workflow of sonographers (AI-ECHO): Randomized crossover trial" AHA 2024.