By Gina Shaw
Researchers are investigating the potential for pharmacists to use artificial intelligence and its subset machine learning at the bedside in a variety of scenarios, experts said at ASHP Pharmacy Futures 2024, in Portland, Ore.
These AI applications include predicting the risk for adverse drug events, fluid overload in the ICU and opioid overdoses. But beyond those specific uses, the speakers noted, pharmacists should work to develop a broader understanding of how to use AI to solve clinical problems and how to evaluate the clinical literature on incorporating AI into practice.
“For example, we are all excited about doing mortality prediction, particularly in the ICU,” said Andrea Sikora, PharmD, a clinical associate professor at the University of Georgia College of Pharmacy, in Athens, and a critical care pharmacist specialist at Augusta University Medical Center. But these mortality prediction models aren’t always benchmarked against industry-standard assessments of severity of illness and mortality prediction, such as the Sequential Organ Failure
Assessment (SOFA) score, she continued. “It reminds me of a Rube Goldberg machine—like a super fancy 25-step process to make a piece of toast. But you could also just push the button on the toaster. Does the fancy, cool widget actually perform better than the basic tool?”
When evaluating a new AI tool, Dr. Sikora recommended using your “ABCs” to rigorously assess its clinical relevance in the hospital setting:
• A. Alignment with clinical outcome of interest. “Does it actually affect what happens with the patient and what occurs in practice?” she asked.
• B. Benchmarked against existing standards. “What is already being used within that clinical space, such as the Tobin index, SOFA or APACHE II [Acute Physiology and Chronic Health Evaluation II], and how does your model compare with that?” Dr. Sikora asked.
• C. Clinical clarity and implementation. “Is this something that you could take to a bedside clinician and say, ‘I think this works and this is why it works,’ and they could see its face validity and agree?”
• S. Standardized reporting. “Can the tool’s developers tell you that they used a particular guideline, like STROBE [STrengthening the Reporting of OBservational Studies in Epidemiology], to say they did a good job of reporting their methodology?” she asked.
Dr. Sikora noted that relatively few of the AI models explored in published clinical studies have actually been implemented in practice. For example, a study published last year found that out of nearly 500 published articles on machine learning models for sepsis care, only two were randomized clinical investigations evaluating whether such models improve patient-centered outcomes (Crit Care Med 2023;51[8]:985-991).
“This means that most of what you read at this point is someone doing some fun things and some pre-processing, and that’s the end of it,” Dr. Sikora said. “It’s exciting, but it’s good to realize that this is where most of the work is right now, as opposed to actually implementing AI and showing differences in outcomes.”
Dr. Sikora reported no relevant financial disclosures.