
'It Is Crazy!' The Promise and Potential Peril of ChatGPT

— The possibilities for this AI-guided bot in medicine are dazzling, as long as we keep it in check

Last Updated January 10, 2023
MedPage Today
[Photo: the red eye of HAL 9000 from the movie 2001: A Space Odyssey]

    Fred Pelzman is an associate professor of medicine at Weill Cornell, and has been a practicing internist for nearly 30 years. He is medical director of Weill Cornell Internal Medicine Associates.

"Have you guys tried this yet? It is crazy!!!"

This is the email a colleague sent to me and a couple of others last week, under the subject line "ChatGPT." This is apparently the latest thing blowing up the Internet: a chatbot that follows your instructions to create incredibly detailed text and more, output that seems to border on the fantastical.

A quick search of the web turned up multiple threads from people who have been trying it out, demonstrating its incredible power and potential uses. Apparently (and I'm just beginning to understand this) it can take the simple written instructions and guidance you provide in a prompt, and turn them into amazingly detailed and accurate content in seconds.

Examples I saw during my brief look around were pretty impressive, and opened up a world of opportunities for its use in healthcare. People were using it, at least in demos, to write out their assessments and plans in patients' progress notes, to draft letters to insurance companies to get prior authorization for medications and other services, to craft elaborate condition-specific discharge instructions, operative reports, and much, much more.

Is this the start of what we've all been waiting for, the harnessing of the power of the Internet, artificial intelligence, and the amazing amount of information out there to make the lives of our patients better? To make the lives of doctors and everyone else trying to take care of patients easier and better as well?

Think of the possibilities. The morning after seeing a bunch of patients, we are all faced with a bloated in-basket, full of new lab and imaging results. Maybe soon I can run through these results in my in-basket and instruct some chatbot to "tell everybody I saw yesterday that their labs were normal, except for Mrs. Jones, who needs to get a renal sonogram and repeat her electrolytes in 1 week, and send a prescription for atorvastatin 10 mg to Mr. Smith's pharmacy and tell him to come in fasting for repeat lipids in 6 weeks and stop eating cheeseburgers."

Take it a step further. "Run through all my patients' charts, see who's due for their mammogram this year, and go ahead and order one and schedule it for them, and send them a portal message and a letter letting them know this has been set up."

This intelligent creature could be running in the background all the time, sensing patterns and missed opportunities, offering suggestions, noting trends -- maybe even one day becoming a brilliant diagnostician in and of itself. "Take a look at this patient's labs, and look at the differential diagnosis I've generated, and tell me what we're missing."

A lot of people are working on ways to simplify the everyday tasks that we do, creating macros and bulk text items that can be inserted into charts and patient letters. Simple formulaic things, and eventually more complex tasks, can turn into things that happen automatically.

I love that it can pull primary and secondary references from the literature and include these in letters to insurance companies, especially when they've denied our patients care that we believe is right for them. (However, verifying the references before the letter goes out is strongly advised, since the bot has been known to make up references.)
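For the technically curious, here's a rough sketch of what that draft-then-verify workflow might look like in code. To be clear, this is purely illustrative, not anything I've deployed: the library usage follows the openai-python client, but the model name, the prompts, and the citation-matching pattern are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a draft-then-verify workflow: ask a model to draft
# a prior-authorization appeal letter, then flag every string that looks
# like a citation so a human can confirm it exists before anything is sent.
# Assumptions: the model name, prompts, and regex are illustrative only.
import re
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def draft_prior_auth_letter(patient_summary: str, denied_service: str) -> str:
    """Ask the model for an appeal letter that cites primary literature."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[
            {"role": "system",
             "content": ("You draft prior-authorization appeal letters for "
                         "a physician. Cite primary literature where "
                         "relevant.")},
            {"role": "user",
             "content": (f"Patient summary: {patient_summary}\n"
                         f"Denied service: {denied_service}\n"
                         "Draft an appeal letter with references.")},
        ],
    )
    return response.choices[0].message.content

def citations_needing_review(letter: str) -> list[str]:
    """Crude pass to pull out anything that looks like a citation."""
    # Matches bracketed numbers like [3] and 'Author et al., 2021' strings.
    pattern = r"\[\d+\]|[A-Z][a-z]+ et al\.,? \d{4}"
    return re.findall(pattern, letter)

letter = draft_prior_auth_letter(
    "58-year-old with refractory migraine, two failed preventives",
    "CGRP monoclonal antibody therapy",
)
for cite in citations_needing_review(letter):
    print("VERIFY BEFORE SENDING:", cite)
```

The second function is the whole point of the parenthetical above: the model will happily produce plausible-looking citations, so only a human check against the actual literature should let a letter out the door.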

Someday, as these systems continue to improve, get more powerful, and increase their access to more and more information, it may be possible that they can start to see the forest for the trees, to be just as smart as we are, and to work alongside us to help us take care of patients. Think of a world where nothing falls through the cracks, and nothing is missed by our eyes or our ears.

The contents of our histories and physical exams and all the blood tests we order can be collected and synthesized by these digital assistants, working with us, helping make things better. They could even sit between our patients' hearts and our ears, in-line in our stethoscope tubing, catching and interpreting subtle heart murmurs.

For now, I think it'll work really well to take over some of the generally boring and rote tasks that we all need to do that crush us under their weight, building better macros, patient education material and letters, and replies to portal messages. But the opportunities are out there for this kind of technology to do so much more, as long as we, the people who are actually doing the doctoring, continue to have a guiding hand in making sure this stuff stays on the right path.

Already there has been a lot of worry on the Internet about potential nefarious uses of this sort of stuff. Think of the 10th-grade student telling the chatbot to write them an eight-page paper on the role of women on the battlefields during the Civil War, using a passionate yet gentle and authoritative voice, including lots of references. Or the PhD candidate asking for "a 100-page thesis on the use of the color green in religious art from the 16th through 19th centuries, make it sound authoritative, a little pompous, and in the tone of everything on my hard drive I've written before."

We need to make sure that we don't let this technology get a mind of its own, and that our patients always understand what's coming from us and what's coming from a machine. Come to think of it, maybe ChatGPT wrote this column, and I'm just finding out about it now.

"."

"I'm sorry, Dave. I'm afraid I can't do that."

Makes you wonder...

Correction: This column was corrected to indicate the name of the bot is ChatGPT, not ChatGBT.