April 10, 2024
In an arena where the smallest bit of data can change the course of an operation—and ultimately have a huge impact on patient outcomes—surgeons are taking a cue from medical imaging’s advancements in artificial intelligence (AI) to glean all the information they can.
Keri A. Seymour, DO, MHS, FACS, FASMBS, a general and bariatric surgeon at Duke University Medical Center in Durham, North Carolina, saw an opportunity to optimize her patients’ success when she teamed up with a Duke radiologist who’s studying body composition analysis from abdominal computed tomography (CT) scans.
Dr. Seymour, an associate professor of surgery at Duke University and director of research in the Division of Minimally Invasive Surgery, has conducted multiple studies on how metabolic factors affect patient outcomes, examining variables that influence the success of an operation and the patient’s postoperative progress.
“Treatments for obesity tend to focus on body mass index (BMI) as a way to standardize our evaluation of patients,” said Dr. Seymour, who also is President of the North Carolina Chapter of the ACS. “But that doesn’t really describe their body composition, or the distribution of adipose tissue and muscle. Patients can have a significant amount of muscle and still have increased weight and a higher BMI.”
Bioelectrical impedance testing provides more comprehensive information, especially for measuring changes in body composition over time. “There is an interplay between patients’ metabolism and their pre- and post-op states,” she explained. “Visualizing that relationship is key to understanding their progress. What I’ve come to appreciate is that we can use medical imaging to evaluate their fat mass—to see if patients are losing not just fat but also muscle.”
Enter Kirti Magudia, MD, PhD, an assistant professor of radiology at Duke University investigating high-level applications of machine learning in radiology.
Drs. Magudia and Seymour are currently working on a study of how CT-based body composition analysis could help optimize the selection and management of bariatric surgery patients. Preliminary results suggest that bariatric surgery patients with low or very low food security have less skeletal muscle and higher subcutaneous fat compared with those who have food security. “Despite these differences, bariatric surgery outcomes were similar across both groups, suggesting its effectiveness in improving the health of patients with obesity, including those facing food insecurity,” they observed.
The two physicians soon learned that their respective collections of data, including routine CT scans, could be combined and mined for important insights on individual patients, even beyond Dr. Magudia’s passion for CT-based body composition. “For example, hepatic arterial anatomy can have many vascular variants,” Dr. Magudia said. “I keep drilling into our radiology trainees that they need to report it. You never know when it’s going to be needed, even for routine surgeries, like cholecystectomy.”
AI tools could also aid in the identification of patients in the emergency department who need the most urgent imaging and surgical intervention, Dr. Magudia said. She further noted that she and Dr. Seymour had both been on call during the prior weekend shift. “Our goal was to find those CT scans that Dr. Seymour needs to know about, so that they could be acted upon—and were not buried under all the other radiology exams for patients with less urgent issues. That way, they could get to the OR as quickly as possible.”
That also means making sure patients get the right kind of imaging, giving the surgeon the most useful information. Deep learning models can help prefill recommendations for appropriate imaging tests, giving providers both a heads-up and a head start.
An Israeli study presented at the 2023 annual meeting of the Radiological Society of North America (RSNA), for example, found that ChatGPT can deliver recommendations for appropriate imaging tests that might be as reliable as those of the European Society of Radiology (ESR) iGuide.1 In their presentation, Mor Saban, PhD, and Shani Rosen, MSc, demonstrated that when ChatGPT is presented with clinical data about patient symptoms, it can generate suggestions to help clinicians select the imaging modality—X-ray, CT, ultrasound, magnetic resonance imaging, and beyond—that an experienced radiologist might recommend.
In that study, human experts evaluated the ChatGPT suggestions and found that up to 87% of them were medically accurate, when compared with those compiled in the ESR iGuide. And, as the authors noted, ChatGPT isn’t even specifically designed for medical tasks.
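For readers curious what such a query might look like in practice, here is a minimal sketch that sends a symptom summary to a large language model and asks for a suggested imaging modality. It is an illustration only: the model name, prompt wording, and use of the OpenAI Python client are assumptions, not the study’s actual protocol, and the study itself worked through ChatGPT’s own interface.

```python
# Minimal sketch of prompting a large language model for an imaging
# recommendation, loosely in the spirit of the RSNA study described above.
# Assumptions: the OpenAI Python client (v1.x) and the "gpt-4" model name;
# the prompt wording is illustrative, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_imaging(symptoms: str) -> str:
    """Ask the model which imaging modality fits a clinical presentation."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are assisting a clinician. Recommend the single "
                        "most appropriate imaging test (X-ray, CT, ultrasound, "
                        "or MRI) and briefly justify it."},
            {"role": "user", "content": symptoms},
        ],
    )
    return response.choices[0].message.content

print(suggest_imaging("45-year-old with right upper quadrant pain, fever, "
                      "and a positive Murphy sign."))
```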
"We work on predicting what is happening in the next couple of seconds, or the next phase of an operation, in order to anticipate surgical risk."
To the uninitiated, statements like “AI can recommend medical imaging tests” might seem like the unsettling prelude to a scenario where physicians could be replaced by machines that lack the nuance of human insight. But understanding how tools like ChatGPT are trained—on the collective knowledge of humans—can shine light on the possibilities for maximizing human potential.
For example, ChatGPT is fed chunks of text called “tokens” that come from websites, books, articles, and other publicly available sources. By building a dataset from these tokens, the model learns to predict the words or phrases human experts would be likely to use given a particular context.1
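To make that prediction step concrete, the toy sketch below counts which word tends to follow which in a miniature corpus, then guesses the likeliest continuation. Production models like ChatGPT use neural networks over subword tokens at vastly larger scale; this captures only the statistical intuition.

```python
# Toy illustration of next-token prediction: count word-pair frequencies
# in a tiny corpus, then predict the most likely next word. Real language
# models use neural networks over subword tokens at vastly larger scale.
from collections import Counter, defaultdict

corpus = ("the patient underwent laparoscopic cholecystectomy . "
          "the patient recovered well . the surgeon reviewed the scan .")
tokens = corpus.split()

# Tally how often each word follows each preceding word.
following: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))      # -> "patient" (seen twice after "the")
print(predict_next("patient"))  # -> "underwent" (first of two equally likely continuations)
```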
In a clinical setting, having auto-filled suggestions could take some of the legwork out of initial evaluation—and even encourage more thorough documentation. In a scenario such as Dr. Magudia’s example, in which being aware of unusual hepatic arterial anatomy could be vital to perioperative planning, an AI tool could help ensure that information is documented, whether or not the reporting radiologist thought it relevant to note.
Elizabeth Burnside, MD, MPH, a professor in the Department of Radiology at the University of Wisconsin-Madison, explained during an RSNA 2023 plenary session the differences between discriminative AI models and generative AI models, offering digestible analogies for what each can accomplish. While discriminative models are primarily used to classify existing data into predetermined outcomes of interest, generative models use algorithms to craft content, incorporating text and images based on the data that trained them.
As an example, a discriminative model could be trained on millions of images of cats and dogs to learn their differences and, when presented with a new image, accurately label it as a cat or dog, Dr. Burnside said. Generative models train on similar data, but in this context, they would then be tasked with generating an image of a new cat or dog.
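A few lines of code can make Dr. Burnside’s distinction concrete. In the sketch below, a logistic regression learns the boundary between two classes (discriminative), while a simple Gaussian model of one class can be sampled to “generate” a new example (generative). The data are synthetic stand-ins for the cat-and-dog analogy, not medical images.

```python
# Sketch of discriminative vs. generative modeling on toy 2-D data.
# Discriminative: learn p(class | features) and draw a decision boundary.
# Generative: model p(features | class) and sample brand-new examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cats = rng.normal(loc=[0, 0], scale=1.0, size=(100, 2))  # toy "cat" features
dogs = rng.normal(loc=[3, 3], scale=1.0, size=(100, 2))  # toy "dog" features
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)

# Discriminative model: classifies a new point as cat or dog.
clf = LogisticRegression().fit(X, y)
print("label for (2.5, 2.5):", clf.predict([[2.5, 2.5]])[0])  # likely 1 (dog)

# Generative model (here, one Gaussian per class): fit the data distribution,
# then sample a new, never-before-seen "dog" from it.
dog_mean, dog_cov = dogs.mean(axis=0), np.cov(dogs.T)
new_dog = rng.multivariate_normal(dog_mean, dog_cov)
print("generated dog-like point:", new_dog)
```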
In a radiology setting, discriminative AI tasks could include identifying cancer on a mammogram or finding a bleed on a neuroimaging study—or determining whether pneumonia seen on a chest X-ray is related to COVID-19 infection. A generative model might be employed to create a radiology report based on the images it receives, simulate disease progression in a body system, or create summaries for patients in lay language.
The accuracy and the generalizability of an algorithm are dependent not only on the amount of information it’s given, but also on the composition and diversity—including patient and surgeon characteristics—of the training data, said Jennifer A. Eckhoff, MD, from Massachusetts General Hospital in Boston.
Dr. Eckhoff, a senior resident at University Hospital Cologne in Germany, interrupted her residency in 2021 to start a postdoctoral fellowship at Mass General’s Surgical Artificial Intelligence and Innovation Laboratory (SAIIL). She’s now harnessing AI’s predictive qualities to assess risk from intraoperative events.
“My research focus is on computer vision-based analysis of surgical video data—specifically intra-abdominal minimally invasive surgical data,” Dr. Eckhoff explained. “We work on predicting what is happening in the next couple of seconds, or the next phase of an operation, in order to anticipate surgical risk.”
Using video analysis, Dr. Eckhoff’s team examines the spatial and temporal relationships of the actions and tools that compose surgical workflow, using them to predict a surgeon’s next move. They train AI models to identify procedural steps on a granular level, down to tissue-to-tool interaction.
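The temporal side of that work can be sketched in a few lines. Assuming per-frame feature vectors have already been extracted from surgical video by a vision backbone, a small recurrent network can be trained to predict the next procedural phase. The architecture, feature size, and phase labels below are assumptions for illustration, not SAIIL’s actual model.

```python
# Illustrative sketch of predicting the next surgical phase from a sequence
# of per-frame video features. The architecture, feature size, and phase
# labels are assumptions for demonstration, not SAIIL's actual model.
import torch
import torch.nn as nn

PHASES = ["preparation", "dissection", "clipping", "gallbladder removal"]

class NextPhasePredictor(nn.Module):
    def __init__(self, feature_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(PHASES))

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # frame_features: (batch, time, feature_dim) from a vision backbone.
        _, last_hidden = self.rnn(frame_features)
        return self.head(last_hidden.squeeze(0))  # logits over next phase

model = NextPhasePredictor()
clip = torch.randn(1, 30, 128)  # 30 frames of pre-extracted features
logits = model(clip)
print("predicted next phase:", PHASES[logits.argmax(dim=1).item()])
```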
The next step is to integrate quantitative data from these video analyses alongside perioperative data to help predict patient-specific complications, readmissions, and oncological outcomes. One of SAIIL’s current projects, coincidentally, focuses on patients undergoing laparoscopic cholecystectomy.
Most AI applications in surgery are currently based on supervised machine learning models, which involve training an algorithm on labeled data, Dr. Eckhoff explained. “So an algorithm is provided with a certain video dataset, which might be labeled with respect to the critical view of safety and its three subcomponents,” she said, referring to visual criteria in a laparoscopic image—also known as Strasberg’s criteria—that let a surgeon know it’s safe to proceed with removing the gallbladder.
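In code, the supervised setup Dr. Eckhoff describes might look something like the sketch below, where each video frame carries three binary labels, one per critical-view-of-safety criterion, and a model is trained to predict all three at once. The backbone and label wording are illustrative assumptions, not SAIIL’s actual pipeline.

```python
# Sketch of supervised learning on labeled surgical video frames: each frame
# carries labels for the three critical-view-of-safety criteria, and a model
# learns to predict all three at once (multi-label classification).
import torch
import torch.nn as nn

CRITERIA = ["hepatocystic triangle cleared",
            "lower gallbladder separated from liver bed",
            "only two structures entering gallbladder"]

backbone = nn.Sequential(            # stand-in for a pretrained CNN
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, len(CRITERIA)),
)
loss_fn = nn.BCEWithLogitsLoss()     # one independent yes/no per criterion
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# One training step on a dummy labeled batch: 8 frames, 3 binary labels each.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, len(CRITERIA))).float()
optimizer.zero_grad()
loss = loss_fn(backbone(frames), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```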
A challenge for AI-augmented surgery is building models that adequately integrate human knowledge and understanding. Dr. Eckhoff and her colleagues have proposed a novel approach to training the networks: incorporating a knowledge graph into the video analysis, to identify an algorithm’s “understanding” of surgical notions and its ability to acquire conceptual knowledge as it applies to the data.
Their research demonstrated that AI models are able to learn tasks such as verifying the critical view of safety, applying the Parkland grading scale, and recognizing instrument-action-tissue triplets.2
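One way to picture the knowledge-graph idea is as a small graph of surgical concepts whose edges encode instrument-action-tissue triplets. The handful of triplets below is a made-up miniature for illustration; a real surgical knowledge graph would be far richer.

```python
# Miniature illustration of encoding surgical knowledge as a graph whose
# edges express instrument-action-tissue triplets. The triplets here are a
# made-up sample, not drawn from any real dataset.
import networkx as nx

graph = nx.DiGraph()
triplets = [
    ("grasper", "retracts", "gallbladder"),
    ("hook", "dissects", "cystic plate"),
    ("clipper", "clips", "cystic duct"),
    ("scissors", "cuts", "cystic duct"),
]
for instrument, action, tissue in triplets:
    graph.add_edge(instrument, tissue, action=action)

# Query: which instruments act on the cystic duct, and how?
for instrument, tissue, data in graph.in_edges("cystic duct", data=True):
    print(f"{instrument} --{data['action']}--> {tissue}")
```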
The principal investigator on the SAIIL project, Ozanan R. Meireles, MD, FACS, has assumed a new role as the Duke University Department of Surgery’s inaugural vice-chair for innovation. Dr. Meireles joined Duke in January, bringing with him the collaborative efforts of SAIIL and the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory.
“By using the interaction between the surgeon and the machine to improve operational efficiency, the machines will get better over time,” Dr. Meireles said. “We’re going to be building our shared knowledge to create what we call a shared surgical consciousness, one that holds more knowledge than any single surgeon can acquire. That collective surgical consciousness can guide us away from complications and truly improve patient care.”
Drs. Meireles and Eckhoff both expressed their excitement about the Critical View of Safety (CVS) Challenge, endorsed by the Society of American Gastrointestinal and Endoscopic Surgeons. It’s a global initiative to generate a large, diverse, annotated dataset for assessing the CVS, and it encourages researchers to compete in developing AI algorithms for real-time CVS detection, enhancing surgical safety and potentially easing surgeons’ workloads.
“In our work, we very much focus on governance of surgical video data and AI as it is applied to surgery,” Dr. Eckhoff added. “We’re composing a framework for interdisciplinary and international collaboration, which is essential for assembling large datasets, with respect to internationally varying privacy and data management regulations.”
As Dr. Meireles explained, the CVS Challenge platform is designed to automatically de-identify all videos that contributors submit. “When you upload a video, you do it through a secure account, and there’s a data-sharing agreement explaining that the video will be de-identified. The platform strips all the metadata, and, if the camera comes out of the abdomen and there are images taken outside the body, it blurs them.”
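The two de-identification steps Dr. Meireles describes—discarding metadata and blurring out-of-body frames—can be sketched as follows. The out-of-body check here is a crude brightness heuristic standing in for the platform’s actual (and surely more robust) detector; file names and the threshold are assumptions.

```python
# Illustrative sketch of video de-identification: re-encode the file (which
# drops the original container metadata) and blur frames recorded outside
# the body. The brightness heuristic is an assumption for illustration only.
import cv2

def looks_out_of_body(frame) -> bool:
    # Out-of-body frames are often much brighter than intra-abdominal video.
    return frame.mean() > 150

reader = cv2.VideoCapture("input_video.mp4")
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = reader.get(cv2.CAP_PROP_FPS)

# Writing frames into a fresh container discards the original metadata.
writer = cv2.VideoWriter("deidentified.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
while True:
    ok, frame = reader.read()
    if not ok:
        break
    if looks_out_of_body(frame):
        frame = cv2.GaussianBlur(frame, (51, 51), 0)  # hide identifying scenes
    writer.write(frame)
reader.release()
writer.release()
```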
He added that, while privacy regulations vary in different parts of the world and there are special considerations for certain rare cases, this process for anonymizing data has been well received by participants across the globe who recognize that the ability to share surgical knowledge is essential for actionable research.
An analogy often used to compare AI with human decision-making is that of a self-driving car versus a human driver: the former hasn’t quite mastered the complicated driving that benefits from the nuances of human experience. Dr. Meireles likens AI-assisted surgery to a human driver using GPS. He noted that drivers are more likely to follow a suggestion from a GPS—which mines collective data to predict the most efficient route—than they are from a human passenger.
Still, “if you’re using a navigation tool and it tells you to turn right or left, you could ignore it and just keep driving,” he said.
That raises questions about accountability and communication: “As we’re going through this cultural transformation era through artificial intelligence, patients should understand that AI agents might be helping their physician make a decision—or even that their physician could be disagreeing with the AI. How are we going to be explaining that, and what’s the patient’s role in this?” Dr. Meireles asked.
If incorporating these steps into surgical workflow seems daunting, Dr. Magudia has a reminder for clinicians. “Around 30 years ago, most radiology exams were on physical film, and it took a lot of work and effort among vendors and clinical radiologists to get to where we are today with PACS (picture archiving and communication systems) and the DICOM (digital imaging and communications in medicine) standard imaging format. This has revolutionized the way radiology is practiced and allowed us to advance further.”
In her role as chief quality officer, Dr. Seymour has begun conversations with other clinicians about using accessible data to reveal additional factors that contribute to a surgery’s success. “We’ve talked about surgical site infection management, looking at the information we already have in the operating room—anesthesia, patient temperature, the timing of antibiotics—all the things we can record and review to see if they’ll be predictive of patient outcomes.”
And incorporating those factors into an automated system can help surgeons better anticipate the course of their workflow. One of Dr. Meireles’s recent projects—again in laparoscopic cholecystectomies—involved an AI model that was trained to grade intraoperative difficulty via the Parkland grading scale from an initial view of the gallbladder.
The AI’s performance was comparable to that of a human surgeon in identifying the degree of gallbladder inflammation, which is predictive of intraoperative course.3 By quickly predicting how difficult a cholecystectomy will be—and how long it will take a surgeon to complete—this automated assessment could be useful for optimizing workflow in the operating room, the researchers stated.
The model also could help develop personalized feedback for surgeons and trainees, offering opportunities for them to perfect their technique.
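At inference time, such an automated assessment might look something like the sketch below: a trained image classifier scores the first clear view of the gallbladder into Parkland grades 1 through 5. The network choice, weight file, and preprocessing are assumptions for illustration, not the published model.

```python
# Sketch of automated Parkland grading at inference time: a trained image
# classifier scores the initial gallbladder view into grades 1-5. The model
# weights and file names below are hypothetical, not the published model.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

GRADES = [1, 2, 3, 4, 5]  # Parkland scale: 1 = normal, 5 = severe inflammation

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, len(GRADES))
model.load_state_dict(torch.load("parkland_classifier.pt"))  # hypothetical weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

frame = Image.open("initial_gallbladder_view.png").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))
print("predicted Parkland grade:", GRADES[logits.argmax(dim=1).item()])
```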
Harnessing the potential of AI will naturally come with regulatory and data management responsibilities, Dr. Eckhoff noted. “That also entails involving different stakeholders, including patients, other operating room staff, computer scientists, industry representatives, and other medical specialties.”
Medical specialties like radiology and pathology have embraced AI at a particularly impressive pace, explained Dr. Eckhoff. Indeed, the RSNA annual meeting in November boasted nearly 400 sessions covering AI topics alone, and not just for clinical decision support. Presenters explored applications from opportunistic screening to patient-centered practice to creating a more egalitarian process for leadership selection.
“The impact that clinical societies have is unmatched, especially in the United States,” she said. “And we have a great opportunity to shape the perception of AI among clinicians in the future, demonstrating that we can use it as a tool, and how the umbrella term ‘AI’ can be divided into many different subsections and subdisciplines.”
Dr. Eckhoff said she is excited to see how AI will impact outcomes. “Each tool needs to be tested for clinical validity, but we’re not far from seeing how AI can really change the concept of surgical safety.”
Evonne Acevedo is a freelance writer.