‘Busy’ doctors are now being told by a leading medical group that it’s fine to use ChatGPT to reduce workload

ChatGPT was highly effective at summarizing new clinical trials and case reports, suggesting that busy physicians could rely on the AI tool to learn about the latest developments in their field in a relatively short time.

A leading US medical body is encouraging stressed-out doctors to use ChatGPT to free up time.

A new study looked at how well the AI model can interpret and summarize complex medical studies, which physicians are encouraged to read to stay abreast of the latest research and treatment developments in their field.


The researchers found that the chatbot was accurate 98 percent of the time, providing doctors with quick and reliable summaries of studies across a range of specialties, from cardiology and neurology to psychiatry and public health.

The American Academy of Family Physicians said the results showed ChatGPT “will likely be useful as a screening tool to help busy physicians and scientists.”

The platform was 72 percent accurate overall. It was best at making a definitive diagnosis, with an accuracy of 77 percent. Research has also shown that it can pass a medical licensing exam and be more empathetic than real doctors.

The report comes as AI quietly creeps into healthcare. According to a survey by the American Medical Association, two-thirds of doctors see benefits in the technology, and 38 percent of physicians report that they already use it as part of their daily practice.

Roughly 90 percent of hospital systems use AI in some form, up from 53 percent in the second half of 2019.

Meanwhile, an estimated 63 percent of doctors experienced symptoms of burnout in 2021, according to the AMA.

While the Covid pandemic exacerbated physician burnout, rates were already high before it, with about 55 percent of physicians reporting feeling burned out in 2014.

The hope is that AI technology will help alleviate the high rates of burnout that are causing the physician shortage.

Kansas physicians affiliated with the American Academy of Family Physicians assessed the AI's ability to analyze and summarize clinical reports from 14 medical journals, checking whether it interpreted them correctly and could create accurate summaries that physicians could read and digest quickly.

Serious inaccuracies were infrequent, suggesting that busy physicians could rely on AI-generated research summaries to learn about the latest techniques and developments in their field without sacrificing valuable time with patients.

Researchers said: ‘We conclude that because ChatGPT summaries were 70% shorter than the original abstracts and were generally of high quality, high accuracy and low bias, they are likely to be useful as a screening tool to help busy physicians and scientists more quickly assess whether further review of an article is worthwhile.’


The doctors at the University of Kansas tested the ChatGPT-3.5 model, the type often used by the public, to determine whether it could summarize medical research abstracts and determine the relevance of these articles to different medical specialties.

They fed ten articles to the AI's large language model, which is designed to understand, process and generate human language based on training on large amounts of textual data. The articles came from journals specializing in different health topics, such as cardiology, pulmonary medicine, public health and neurology.

They found that ChatGPT was able to produce high-quality, accurate and unbiased summaries, despite being held to a 125-word limit.

Only four of the 140 summaries created by ChatGPT contained serious inaccuracies. One of them left out a serious risk factor for a health problem: being a woman.

Another stemmed from a semantic misunderstanding by the model, while the rest were due to misinterpretation of trial methods, such as whether a study was double-blind.

The researchers said: ‘We conclude that ChatGPT summaries contain rare but important inaccuracies that prevent them from being considered a definitive source of truth.

‘Physicians are strongly cautioned not to rely solely on ChatGPT-based summaries to understand research methods and research results, especially in high-risk situations.’

Still, most of the inaccuracies, noted in 20 of the 140 summaries, were minor and largely related to ambiguous language. They were not significant enough to change the intended message or conclusions of the text.

The healthcare industry and the general public have accepted AI in healthcare with some reservations, largely preferring to have a doctor present to verify ChatGPT's answers, diagnoses, and drug recommendations.

All ten studies were published in 2022, a deliberate choice by the researchers, since the AI model was trained only on data published through 2021.

By feeding it text that had not been part of its training data, the researchers could elicit the most organic responses from ChatGPT, uncontaminated by studies the model had already seen.

ChatGPT was asked to ‘self-reflect’ on the quality, accuracy and biases of the written research summaries.

Self-reflection is a powerful tool for AI language models. It allows chatbots to evaluate their own performance on specific tasks, such as analyzing scientific studies, by relying on complex algorithms, comparing methodology with established standards and using probability to measure levels of uncertainty.


Keeping up with the latest developments in their field is one of a doctor's many responsibilities. But the demands of the job, especially caring for patients in a timely manner, often leave them without the time to delve into academic studies and case reports.

There are concerns about inaccuracies in ChatGPT’s responses, which could put patients at risk if not monitored by trained doctors.

In a study presented last year at a conference of the American Society of Health-System Pharmacists, almost three-quarters of ChatGPT's responses to drug-related questions reviewed by pharmacists were found to be incorrect or incomplete.

At the same time, an external panel of physicians found ChatGPT's answers to medical questions to be both more empathetic and of higher quality than doctors' answers 79 percent of the time.

Public appetite for AI in healthcare appears limited, especially if doctors rely on it too heavily: a 2023 study by Pew Research Center found that 60 percent of Americans would be “uncomfortable” with their provider relying on it.

Meanwhile, 33 percent of people said relying on AI would lead to worse patient outcomes, while 27 percent said it would make no difference.

Time-saving measures are crucial for doctors to give them more time for the patients they care for. Doctors currently have about 13 to 24 minutes for each patient.

Other responsibilities related to patient billing, electronic health records, and scheduling are quickly taking up a larger portion of physicians’ time.

The average doctor spends almost nine hours per week on administrative tasks. Psychiatrists spend the largest share, about 20 percent of their working week, followed by internists (17.3 percent) and family physicians/GPs (17.3 percent).

The administrative workload is taking a measurable toll on American physicians, who were experiencing rising burnout even before the global pandemic. The Association of American Medical Colleges predicts a shortage of as many as 124,000 physicians by 2034, a staggering figure that many attribute to rising burnout rates.

Dr. Marilyn Heine, trustee of the American Medical Association, said, “AMA studies have shown that administrative burden and physician burnout are high and that they are linked.”

The latest findings have been published in the journal Annals of Family Medicine.
