
ChatGPT for health care providers: Can the AI chatbot make the professionals’ jobs easier?

In addition to writing articles, songs and code in mere seconds, ChatGPT could potentially make its way into your doctor’s office — if it hasn’t already.

The artificial intelligence-based chatbot, released by OpenAI in November 2022, is a natural language processing (NLP) model trained on vast amounts of text from the internet, which it draws on to produce answers in a clear, conversational format.

While it’s not intended to be a source of personalized medical advice, patients are able to use ChatGPT to get information on diseases, medications and other health topics. 


Some experts even believe the technology could help physicians provide more efficient and thorough patient care.

Dr. Tinglong Dai, professor of operations management at the Johns Hopkins Carey Business School in Baltimore, Maryland, and an expert in artificial intelligence, said that large language models (LLMs) like ChatGPT have “upped the game” in medical AI.


Some experts believe that the ChatGPT artificial intelligence chatbot could help physicians provide more efficient and thorough patient care. (iStock)

“The AI we see in the hospital today is purpose-built and trained on data from specific disease states — it often can’t adapt to new scenarios and new situations, and can’t use medical knowledge bases or perform basic reasoning tasks,” he told Fox News Digital in an email.

“LLMs give us hope that general AI is possible in the world of health care.”

Clinical decision support

One potential use for ChatGPT is to provide clinical decision support to doctors and medical professionals, assisting them in selecting the appropriate treatment options for patients.

In a preliminary study from Vanderbilt University Medical Center, researchers analyzed the quality of 36 AI-generated suggestions and 29 human-generated suggestions regarding clinical decisions.

Of the 20 highest-scoring responses, nine came from ChatGPT.

“The suggestions generated by AI were found to offer unique perspectives and were evaluated as highly understandable and relevant, with moderate usefulness, low acceptance, bias, inversion and redundancy,” the researchers wrote in the study findings, which are available through the National Library of Medicine.

Dai noted that doctors can enter medical records from a variety of sources and formats — including images, videos, audio recordings, emails and PDFs — into large language models like ChatGPT to get second opinions.


“It also means that providers can build more efficient and effective patient messaging portals that understand what patients need and direct them to the most appropriate parties or respond to them with automated responses,” he added.

Dr. Justin Norden, a digital health and AI expert who is an adjunct professor at Stanford University in California, said he’s heard senior physicians say that ChatGPT is probably “as good or better” than most interns during their first year out of medical school.


One potential use for ChatGPT is to provide clinical decision support to doctors and medical professionals, assisting them in selecting the appropriate treatment options for patients. (iStock)

“We’re seeing medical plans generated in seconds,” he told Fox News Digital in an interview. 

“These tools can be used to draw relevant information for a provider, to act as a sort of ‘co-pilot’ to help someone think through other things they could consider.”

Health education

Norden is especially excited about ChatGPT’s potential use for health education in a clinical setting.

“I think one of the amazing things about these tools is that you can take a body of information and transform what it looks like for many different audiences, languages and reading comprehension levels,” he said.


ChatGPT could enable physicians to fully explain complex medical concepts and treatments to each patient in a way that’s digestible and easy to understand, said Norden.

“For example, after having a procedure, the patient could chat with that body of information and ask follow-up questions,” Norden said. 

Administrative tasks

The lowest-hanging fruit for ChatGPT in health care, said Norden, is streamlining administrative tasks, which make up a “huge time component” of medical providers’ work.

In particular, he said some providers are looking to the chatbot to streamline medical notes and documentation.

“On the clinical side, people are already starting to experiment with GPT models to help with writing notes, drafting patient summaries, evaluating patient severity scores and finding clinical information quickly,” he said.


Some experts believe that AI language models such as ChatGPT could potentially help streamline patient discharge instructions.  (iStock)

“Additionally, on the administrative side, it is being used for prior authorization, billing and coding, and analytics,” Norden added.

Two medical tech companies that have made significant inroads into these applications are Doximity and Nuance, Norden pointed out.

Doximity, a professional medical network for physicians headquartered in San Francisco, launched its DocsGPT platform to help doctors write letters of medical necessity, denial appeals and other medical documents.


Nuance, a Microsoft company based in Massachusetts that creates AI-powered health care solutions, is piloting its GPT-4-enabled note-taking program.

The plan is to start with a smaller subset of beta users and gradually roll out the system to its 500,000+ users, said Norden.

While he believes these types of tools are still in need of regulatory “guard rails,” he sees a big potential for this type of use, both inside and outside health care.

“If I have a big database or pile of documents, I can ask a natural question and start to pull out relevant pieces of information — large language models have shown they’re very good at that,” he said.

Patient discharges

The hospital discharge process involves many steps, including assessing the patient’s medical condition, identifying follow-up care, prescribing and explaining medications, providing lifestyle restrictions and more, according to Johns Hopkins.

AI language models like ChatGPT could potentially help streamline patient discharge instructions, Norden believes. 


“This is incredibly important, especially for someone who has been in the hospital for a while,” he told Fox News Digital. 

Patients “might have lots of new medications, things they have to do and follow up on, and they’re often left with [a] few pieces of printed paper and that’s it.”

He added, “Giving someone far more information in a language that they understand, in a format they can continue to interact with, I think is really powerful.”

Privacy and accuracy cited as big risks 

While ChatGPT could potentially streamline routine health care tasks and increase providers’ access to vast amounts of medical data, it is not without risks, according to experts.

Dr. Tim O’Connell, the vice chair of medical informatics in the department of radiology at the University of British Columbia, said there is a serious privacy risk when users copy and paste patients’ clinical notes into a cloud-based service like ChatGPT. 


“Unlike ChatGPT, most clinical NLP solutions are deployed into a secure installation so that sensitive data is not shared with anyone outside the organization,” he told Fox News Digital. 

“Both Canada and Italy have announced that they are investigating OpenAI [the developer of ChatGPT] to see if they are collecting or using personal information inappropriately.”

Additionally, O’Connell said the risk of ChatGPT generating false information could be dangerous. 

Health care providers generally categorize mistakes as “acceptably wrong” or “unacceptably wrong,” he said.


While ChatGPT could potentially streamline routine health care tasks and increase providers’ access to vast amounts of medical data, it’s not without risks, experts say. (Gabby Jones/Bloomberg via Getty Images)

“An example of ‘acceptably wrong’ would be for a system not to recognize a word because a care provider used an ambiguous acronym,” he explained. 

“An ‘unacceptably wrong’ situation would be where a system makes a mistake that any human — even one who is not a trained professional — would not make.”


This might mean making up diseases the patient never had — or having a chatbot become aggressive with a patient or give them bad advice that may harm them, said O’Connell, who is also CEO of Emtelligent, a Vancouver, British Columbia-based medical technology company that’s created an NLP engine for medical text.


“Currently, ChatGPT has a very high risk of being ‘unacceptably wrong’ far too often,” he added. “The fact that ChatGPT can invent facts that look plausible has been noted by many as one of the biggest problems with the use of this technology in health care.”


“We want medical AI software to be trustworthy, and to provide answers that are explainable or can be verified to be true by the user, and produce output that is faithful to the facts without any bias,” he continued. 

“At the moment, ChatGPT does not yet do well on these measures, and it is hard to see how a language generation engine can provide any such guarantees.”
