Background on Dr. Gabriel:
Dr. Rodney A. Gabriel is a tenured Associate Professor of Anesthesiology and Associate Adjunct Professor in Biomedical Informatics at the University of California, San Diego. He is the Vice-Chair of Perioperative Informatics and the Medical Director of the Koman Outpatient Pavilion Ambulatory Surgery Center. He earned his medical degree at UC San Francisco and completed his anesthesiology residency at Harvard Medical School/Brigham & Women’s Hospital, followed by a fellowship in Regional Anesthesia and Acute Pain Medicine as well as an NIH fellowship in Biomedical Informatics at UC San Diego. Dr. Gabriel provides clinical care for surgical patients in the operating room as well as chronic pain patients in the outpatient setting. Clinically, he specializes in pain management using ultrasound-guided cryoneurolysis to treat chronic pain. He is actively engaged in research, funded by Wellcome Leap and the National Institutes of Health, and he also holds various industry-sponsored grants related to pain medicine. His research focuses on leveraging artificial intelligence to process electronic health record and -omics data to tackle problems including opioid addiction, persistent post-surgical pain and opioid use, critical care medicine, operating room efficiency, social determinants of health and their effects on healthcare, and surgical optimization.
When did you first become interested in artificial intelligence and how did you come to start using it?
Rodney became interested in artificial intelligence as an undergraduate student. At the time, he was majoring in computer science, and artificial intelligence courses were just becoming available to students, largely focused on traditional machine learning and older AI strategies. Dr. Gabriel found his true calling in premed, but he remained “tech interested” and continued to carry his computer science skillset into graduate school and his residency program. “I got busy for a while during residency, but landed back on programming. I worked a lot in JavaScript designing mobile applications” for relevant use cases such as scheduling anesthesia. “When I started getting interested in database research, I became interested in statistics, which guided me to a fellowship in biomedical informatics heavy in machine learning and natural language processing for predictive analytics in perioperative medicine.” From there, his interest in AI expanded to include deep learning and neural networks as he became more aware of what AI was capable of in medicine.
What do you think about public-facing AI tools, particularly large language models (LLMs), and the “democratization of AI” that puts them in the hands of “nonexperts”?
“There is good and bad, but this process is overall good. One of the good things about democratization of these tools is that it makes AI scalable.” Where it once took substantial expertise to harness the power of, say, an LLM, which was likely built from scratch for a particular application, nonexperts can now participate in the process. This is a good thing, according to Dr. Gabriel, who believes that nonexperts can contribute significantly to diversity in the ideation process for AI applications. “Learning AI is now about learning to work with nonexperts as well, as a team, and learning how to adapt tools and models rather than building them from scratch.”
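For readers wondering what “adapting rather than building from scratch” can look like, below is a minimal, hypothetical sketch that reuses an off-the-shelf pretrained model for a new task with no training at all. It assumes the Hugging Face transformers library is installed; the model name, example note, and labels are illustrative and are not tools mentioned in the interview.

```python
# A minimal sketch of "adapting rather than building from scratch":
# reusing a general-purpose pretrained model for a brand-new task.
# Assumes: pip install transformers torch (model name is illustrative).
from transformers import pipeline

# Zero-shot classification repurposes a pretrained model for labels
# it was never explicitly trained on -- no new training required.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

note = "Patient reports 8/10 incisional pain, poorly controlled on oxycodone."
labels = ["pain management", "infection", "scheduling"]

result = classifier(note, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))
```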
Do you subscribe to any consumer AI tools, and, if so, how do you use them?
“I subscribe to ChatGPT and that’s it for consumer tool subscriptions. I am not a consumer of typical AI products. I enjoy the artistic expression of writing my own papers, for example. But I explore all of the LLMs. They have both similarities and differences in their architecture and also somewhat different training sets.” Rodney was passionate when he described his use of AI as strategic, and “always for research.” Dr. Gabriel does not always use typical AI products in day-to-day tasks; however, he has found an effective niche for repetitive tasks in his own profession and for how AI can contribute to his creative and scientific processes. “I use GPT to create synthetic data, such as medical notes, to test my own theories and tools and to do research in new areas.”
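As an illustration of that workflow, here is a hedged sketch of generating a synthetic, entirely fictional medical note with the OpenAI Python client. The model name, prompt, and settings are assumptions for demonstration and do not represent Dr. Gabriel’s actual pipeline.

```python
# Hypothetical sketch: generating a synthetic (non-PHI, fictional)
# medical note to test downstream NLP tools. Assumes: pip install openai
# and an OPENAI_API_KEY environment variable; model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You write realistic but entirely fictional clinical notes."},
        {"role": "user",
         "content": "Write a brief post-operative anesthesia note for a "
                    "fictional patient after total knee arthroplasty."},
    ],
    temperature=0.9,  # higher temperature yields more varied synthetic data
)

print(response.choices[0].message.content)
```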
What are your biggest concerns with the growing use of AI tools in medicine?
“My biggest general concern is that people [in the medical field] will fully trust it without maintaining it.” Dr. Gabriel pointed out that many AI applications are not “ready to go” in clinical settings, and that he has personally observed model degradation over time. “You have to be aware of changes in clinical practice that affect the model, and you have to track how the model needs to be adapted as processes change.” One important area for quality control and improvement is fairness and bias. “Transparency is key. And it takes time to test these models if you’re trying to show that they, at the very least, are not negatively affecting outcomes. Ideally they will be improving outcomes, but this takes time to measure.”
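The kind of maintenance he describes can begin with something as simple as tracking a deployed model’s discrimination over time and flagging degradation. Below is a hedged sketch that assumes predictions and observed outcomes are logged to a CSV; the file name, column names, and the 0.05 alert threshold are illustrative assumptions, not a clinical standard.

```python
# Sketch of post-deployment monitoring: compute monthly AUC and flag
# possible drift. Assumes a log with columns: date, y_true (observed
# binary outcome), y_score (model probability), and that each month
# contains both outcome classes. Thresholds are illustrative only.
import pandas as pd
from sklearn.metrics import roc_auc_score

log = pd.read_csv("prediction_log.csv", parse_dates=["date"])
log["month"] = log["date"].dt.to_period("M")

monthly_auc = log.groupby("month").apply(
    lambda g: roc_auc_score(g["y_true"], g["y_score"])
)

baseline = monthly_auc.iloc[0]  # first month serves as the reference
for month, auc in monthly_auc.items():
    flag = "  <-- investigate: possible drift" if baseline - auc > 0.05 else ""
    print(f"{month}: AUC={auc:.3f}{flag}")
```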
On the other side of the trust debate is the patient’s trust in the use of AI tools for aspects of their care. One of Dr. Gabriel’s most recent grants, for predicting mortality in ICU patients, utilizes AI-empowered tools, and his team is tackling this question directly by interviewing patients (participants), along with their family members and care providers, about the use of AI in the study. All tools and technologies in healthcare are under scrutiny, and AI is no different: it is a decision support tool, not a decision maker. “Clinicians are still responsible for their care decisions. AI is a tool – understand it, learn to use it, learn from it, but don’t depend on it.”
“Trust, but verify.”
How do you QC information from AI tools such as LLMs?
“In my particular work, it is important to always look into the ethical issues of using AI in medicine. When developing predictive models, we look at fairness and bias, which involves analyzing the error and accuracy of predictions between specific demographic groups. When we see differences, we can thoughtfully re-design our models to attempt to remedy any disparities. Researchers should explicitly look at these metrics, and we prioritize tracking and reporting them.”
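In practice, the subgroup audit he describes can be as simple as computing error rates per demographic group and comparing them. The sketch below shows one minimal way to do that, assuming binary labels and a single demographic column; the file and column names are illustrative assumptions.

```python
# Sketch of a basic fairness audit: compare accuracy, false-positive
# rate, and false-negative rate across demographic groups. Assumes a
# DataFrame with columns: group, y_true, y_pred (binary labels).
import pandas as pd

def group_metrics(g: pd.DataFrame) -> pd.Series:
    tp = ((g.y_pred == 1) & (g.y_true == 1)).sum()
    tn = ((g.y_pred == 0) & (g.y_true == 0)).sum()
    fp = ((g.y_pred == 1) & (g.y_true == 0)).sum()
    fn = ((g.y_pred == 0) & (g.y_true == 1)).sum()
    return pd.Series({
        "accuracy": (tp + tn) / len(g),
        "fpr": fp / max(fp + tn, 1),  # guard against groups with no negatives
        "fnr": fn / max(fn + tp, 1),  # guard against groups with no positives
        "n": len(g),
    })

df = pd.read_csv("predictions_with_demographics.csv")
report = df.groupby("group").apply(group_metrics)
print(report)  # large gaps between rows suggest disparities to remedy
```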
Where do you think AI has the greatest potential for impact in healthcare?
Speaking of broad and likely potential, Dr. Gabriel mentioned the organization and streamlining of EHR and EMR information, as well as medical billing and reimbursement. His preferred impact led to a beautiful definition of personalized medicine: “I’d like to see AI make meaningful use of the insane amount of data that lives in each patient’s EHR. This is true personalized care, and there’s simply too much data for clinicians to catch everything on their own at this point. Is the patient trending towards opioid addiction, for instance? Are there other trends that we are missing in real time? This is real precision medicine, and it will help dramatically with the allocation of the right resources at the right time.”
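In its simplest form, the “trending towards opioid addiction” example could be a longitudinal screen over prescription data. The following is a purely hypothetical sketch, assuming a table of daily morphine milligram equivalents (MME) per patient; the file and column names, the 30-record window, and the 90 MME/day threshold are illustrative, not validated clinical criteria.

```python
# Hypothetical sketch: flag patients whose opioid exposure is trending
# upward. Assumes a table with columns: patient_id, date, mme (morphine
# milligram equivalents), with roughly daily rows per patient.
import pandas as pd

rx = pd.read_csv("opioid_prescriptions.csv", parse_dates=["date"])
rx = rx.sort_values(["patient_id", "date"])

# Rolling mean of MME over each patient's last 30 records.
rx["mme_30d"] = (rx.groupby("patient_id")["mme"]
                   .transform(lambda s: s.rolling(30, min_periods=7).mean()))

# Flag patients whose most recent rolling average exceeds 90 MME/day,
# a commonly cited caution threshold (used here only as illustration).
latest = rx.groupby("patient_id").tail(1)
flagged = latest[latest["mme_30d"] > 90]
print(flagged[["patient_id", "date", "mme_30d"]])
```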
Do you foresee the same acceleration in growth for AI in medicine and healthcare as we see in other markets?
“There are certain specialties that will undoubtedly experience exponential growth, particularly image analysis – radiology, pathology – where the data type is right for these applications. Other fields may be slower but will still accelerate. Medicine tends to be behind in some of these ways; we cannot adapt as quickly as Google.”
“There are some specialties that require human connection. In my opinion, aspects of these specialties should not be ‘replaced’ by AI, even if AI is good at the task. There are studies that show AI can respond with ‘more empathy’ than a clinician in some cases, for instance, but it is not human, and that matters to the patient.”
What words of encouragement do you have for those who are leery of AI and its growing footprint?
“AI use is inevitable. I would encourage them to consider exactly which aspects of it they are scared or skeptical of, and start the conversation there. Understanding the architecture is key. The more you understand about how it works, the clearer its limitations become.”
Should AI use be disclosed to patients regardless of what it is being used for in healthcare?
Yes. Always.
Do you have any other thoughts on Ai that you would like to share?
“Ethics is the primary concern with AI adoption. The technology is either there or getting there fast, but the ethics of AI models must keep pace with these technological advancements so we can better use them, maintain them, and apply them. There are several ethical principles for AI to consider and report, including, but not limited to: (1) generalizability, (2) fairness and bias, (3) transparency, and (4) explainability. These must be standard practices for AI to be used effectively in medicine.”
