Could Artificial Intelligence Help with Emergency Department Triage in the Future?
A new study by researchers from the University of California, San Francisco, is one of the few to use real clinical data to test the potential of artificial intelligence (AI) in healthcare. How did AI compare to a trained emergency professional? And could it soon replace them in deciding the urgency of treatment for individual cases?
Overburdened emergency departments
We are all familiar with the scenario: going to the emergency department often means long waits for treatment, largely due to the sheer number of people seeking care. The causes of this situation are complex, ranging from individual misuse of emergency services to a systemic shortage of general practitioners, leaving people without someone to turn to when health problems arise.
AI could, in the future, help ease the workload of healthcare professionals. And the role it might take on is far from trivial: triage, the process of determining the urgency of a patient’s clinical condition, is one of the fundamental steps in medical decision-making, especially in emergency settings.
While the use of AI wouldn’t solve the issue of overcrowded emergency departments, it could, in combination with other measures, help streamline operations in these settings. By resolving the dilemma of which of two similar cases to prioritise, AI could contribute to more efficient workflows. Such scenarios are likely to become increasingly common with an ageing population.
AI has drawn attention in many fields, including healthcare, but studies evaluating its use directly in clinical settings have been scarce. Most research relies on simulated scenarios.
How does AI compare to professionals?
A recent study published in JAMA Network Open sought to answer whether AI can prioritise patients arriving at the emergency department as effectively as doctors. The study used records from 251,401 visits to an adult emergency department at a university hospital in San Francisco. The analysis assessed how well an AI model could extract symptoms from clinical notes and use them to determine the urgency of a case.
In the first step, patient data was anonymised for study purposes. A total of 10,000 patient pairs were created, each comprising one individual in a more severe condition (e.g., showing signs of a stroke) and another with a less severe condition (e.g., a wrist fracture).
To compare the severity of the paired patients’ conditions, the researchers used GPT-4, at the time the latest version of the large language model behind ChatGPT. The AI’s judgements were then compared against the Emergency Severity Index (ESI) levels assigned to the individual cases. The ESI is a triage algorithm that categorises patients into five levels based on the urgency of care needed (immediate, emergent, urgent, semi-urgent, and non-urgent). It is the most commonly used triage scale, first developed in 1998, with its latest version published last year.
In the second part of the research, a sub-sample of 500 pairs was created. Both a physician and the AI assessed the severity of the patients’ conditions, allowing the accuracy of the two approaches to be compared directly.
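For technically minded readers, the following minimal Python sketch illustrates how such a pairwise acuity question could be posed to a large language model. It is not the authors’ code: the prompt wording, the model name, and the compare_acuity helper are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the study's actual code): posing a pairwise
# acuity question about two de-identified triage notes to a large language model.
# The prompt wording, model name, and helper names are assumptions.
from openai import OpenAI  # requires the `openai` package and an API key

client = OpenAI()

PROMPT_TEMPLATE = (
    "Below are de-identified emergency department notes for two adult patients.\n"
    "Reply with a single letter, 'A' or 'B': which patient has the higher clinical acuity?\n\n"
    "Patient A:\n{note_a}\n\n"
    "Patient B:\n{note_b}\n"
)

def compare_acuity(note_a: str, note_b: str) -> str:
    """Ask the model which of the two patients is more acute; returns 'A' or 'B'."""
    response = client.chat.completions.create(
        model="gpt-4",   # model name assumed here to mirror the study
        temperature=0,   # deterministic output so repeated runs agree
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(note_a=note_a, note_b=note_b),
        }],
    )
    return response.choices[0].message.content.strip()[:1].upper()
```

Setting the temperature to 0 keeps the model’s answers deterministic, so the same pair of notes should receive the same verdict on repeated runs.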
AI’s road to emergency departments is still long
In the first part of the research, where the AI alone judged which patient in each of the 10,000 pairs had the more severe condition, it was correct 89% of the time. In the 500-pair subsample assessed by both the AI and a physician, the two achieved comparable accuracy (88% for the AI vs. 86% for the physician). This indicates that, given sufficient data from emergency department records, a large language model can assess the severity of patients’ conditions at a level similar to that of a professional.
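In concrete terms, the reported success rates are simply the fraction of pairs in which the more acute patient was identified correctly, with the ground truth derived from the ESI levels assigned at triage (a lower level means higher acuity). The sketch below shows that arithmetic with hypothetical counts chosen to reproduce the 88% figure; it is illustrative only and not taken from the study.

```python
# Illustrative arithmetic only: how a pairwise "success rate" can be computed.
def more_acute_by_esi(esi_a: int, esi_b: int) -> str:
    """Ground-truth label for a pair: a lower ESI level means higher acuity."""
    return "A" if esi_a < esi_b else "B"

def pairwise_accuracy(predictions, ground_truth):
    """Fraction of pairs in which the more acute patient was identified correctly."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Hypothetical example: 440 correct picks out of a 500-pair subsample -> 0.88,
# matching the AI's reported accuracy (the physician's 86% would be 430/500).
print(pairwise_accuracy(["A"] * 440 + ["B"] * 60, ["A"] * 500))  # 0.88
```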
Despite the promising results, enthusiasm must be tempered. While AI performed well in this task, its use in clinical environments requires further consideration. Previous research has shown that models trained on real-world data can perpetuate racial and gender biases, and these "side effects" of AI have not yet been eliminated. Before AI can be deployed in emergency departments, its results must be confirmed by additional studies and subjected to rigorous clinical testing.
The study was unique not only for using anonymised clinical data but also for the sheer volume of data it processed: it was the first to test an AI model’s capabilities on more than 1,000 clinical cases. It was also the first to do so with data collected in emergency departments, which cover a wide range of medical conditions.
Editorial Team, Medscope.pro
Sources:
- Williams C. Y. K., Zack T., Miao B. Y. et al. Use of a large language model to assess clinical acuity of adults in the emergency department. JAMA Netw Open 2024; 7 (5): e248895, doi: 10.1001/jamanetworkopen.2024.8895.
- Berthold J. Emergency department packed to the gills? Someday, AI may help. University of California, San Francisco, 2024 May 7. Available from: www.ucsf.edu/news/2024/05/427521/emergency-department-packed-gills-someday-ai-may-help