Six Interdisciplinary Projects Receive Support from ASSET Center Seed Grants in Trustworthy AI Research for Medicine

Photo from Penn Engineering Today blog

The ASSET Center at Penn Engineering aims to make AI-enabled systems more “safe, explainable and trustworthy.” AI can have a transformative impact on health and medical care broadly, but concerns about safety and trust remain obstacles to wider adoption. One way to meet this challenge is collaboration between ASSET researchers, Penn’s Perelman School of Medicine (PSOM) and the Penn Institute for Biomedical Informatics (IBI) through a series of seed grants that fund research at the intersection of AI and health care.


The funded projects aim to deploy AI tools in revolutionary ways that benefit healthcare practitioners, their patients and the health and well-being of the broader public. These projects, such as one that aims to use AI to detect the human brain’s neuromuscular control patterns, have implications beyond their initial applications.


As AI enters doctors’ offices in increasingly complex and technical ways, the interdisciplinary projects listed below will serve as important initial testing grounds for future technology. They may also help reduce the current hesitation around integrating AI into health care, not only by producing high-performing AI tools, but also by designing those tools with transparency and explainability at their core.


“Researchers in machine learning are focused on advancing the core AI technologies, but their lack of understanding of domain knowledge can lead to solutions that are not clinically useful,” says Rajeev Alur, Zisman Family Professor in Computer and Information Science and Founding Director of the ASSET Center. “Researchers in medicine, on the other hand, are focused on innovative applications of AI to clinical problems, but may run into fundamental obstacles related to explainability and trust. We believe that truly transformative research in AI-driven medicine requires both a focus on clinically relevant problems and foundational advances in trustworthy AI. This funding program is aimed at jump-starting impactful, collaborative research for this purpose. The range of clinical problems covered by the selected projects is truly impressive, and we look forward to the results of this research.”


Details on each of the six funded projects are listed below:


From Thought to Action: Deciphering Neuromuscular Control via Non-Invasive Biosignal Tracking for Real-Time Personalized Robotic Assistants

Nadia Figueroa, Shalini and Rajeev Misra Presidential Assistant Professor, Department of Mechanical Engineering and Applied Mechanics, Penn Engineering

Flavia Vitale, Assistant Professor, Departments of Neurology and Bioengineering, Penn Medicine and Penn Engineering

Ruzena Bajcsy, Professor Emeritus, Department of Computer and Information Science, Penn Engineering

This project aims to understand the dynamics of the human neuromuscular system, i.e., how the brain tells muscles to move. Inspired by advances in implantable brain-spine interfaces such as Neuralink, the team seeks a non-invasive way to track these signals in the human body and then to develop real-time, AI-driven intention-detection models for personal assistant robots that behave more like humans. This interdisciplinary endeavor, requiring collaboration across neuroscience, biomechanics and control, is supported by the combined expertise of the research team and has the potential to make significant impacts in health care and robotics.


Trustworthy Decision Support for Laboratory Test Ordering in Primary Care

Daniel Herman, Assistant Professor, Department of Pathology and Laboratory Medicine, Penn Medicine

Hamed Hassani, Associate Professor, Department of Electrical and Systems Engineering, Penn Engineering

George Pappas, UPS Foundation Professor of Transportation, Department of Electrical and Systems Engineering, Penn Engineering

This project aims to create reliable, actionable decision support recommendations for clinical laboratory test orders. The team will develop a large language model-based approach to predict orders for the most common laboratory tests based on a specific patient’s past clinical notes and data. To ensure that recommendations are reliable, the team will develop novel approaches to quantify uncertainty, allowing primary care clinicians to decide whether to act on an AI-generated recommendation. These new methodologies should generalize to other high-risk use cases for large language models, extending their impact to additional health care and industry applications.


Interpretable and Extensible Patient-Provider Clinical Interaction Analysis for Frailty Detection with Compositional Reasoning and Vision-Language Models

Kevin B. Johnson, David L. Cohen University Professor, Departments of Biostatistics, Epidemiology and Informatics; and Computer and Information Science, Penn Medicine and Penn Engineering

Kyra O’Brien, Assistant Professor, Department of Neurology, Penn Medicine

Eric Eaton, Research Associate Professor, Department of Computer and Information Science, Penn Engineering

Generative AI and deep learning may improve clinical video analysis for health care, specifically in helping to provide real-time natural language processing, patient-provider interaction understanding, and video feature detection. However, these tasks must be performed with an interpretable model to establish sufficient trust in the accuracy of the analysis. This project aims to develop a trustworthy, interpretable system to capture, process and interpret the nuances of communication and non-verbal cues between patients and providers during medical consultations. These cues will be used to assess cognitive impairment toward diagnosing age-related frailty. To accomplish this goal, the team will build upon recent advances in video-language model learning to identify the events of interest relevant to frailty in patient-provider interactions. They will then use their model to develop a system that robustly identifies those events in an interpretable, easily extensible and adaptable manner for varying clinical contexts.


Multimodal Knowledge Bottlenecks for Robust Radiograph Interpretation

Mark Yatskar, Assistant Professor, Computer and Information Science, Penn Engineering

James Gee, Professor, Department of Radiology, Penn Medicine

Machine learning-based systems for radiographic scans often rely on confounded features and have been documented to struggle when transferred between hospitals or to fail catastrophically under slight distribution shifts. The main goal of this project is to create models for analyzing radiographs that are robust to distribution shifts so that systems can be deployed more safely. The research team aims to achieve this by creating high-performance, inherently interpretable systems in which the factors contributing to recognition are aligned with physician knowledge. Specifically, the team will develop a new class of concept bottleneck systems in which physician knowledge is distilled from resources (such as PubMed) via large language models and automatically compiled as identifiers in a larger system. Target problems will include high-stakes 2D visual recognition domains such as analysis of chest radiographs for COVID-19, pneumonia or cancer.


Preventing Complications with Transparent Surgical AI Assistants

Eric Wong, Assistant Professor, Department of Computer and Information Science, Penn Engineering

Daniel A. Hashimoto, Assistant Professor, Department of Surgery, Penn Medicine

Safety predictions from surgical AI assistants can potentially mitigate complications that arise from surgical mistakes, such as bile duct injury during gall bladder removal. However, expert surgeons are hesitant to trust these systems due to a lack of explanation, and consequently ignore the AI’s predictions. This project aims to develop transparent AI assistants that can verifiably explain why areas are safe or unsafe in a way that is aligned with surgical knowledge.


Trustworthy AI for Continuous Monitoring of Graves’ Disease with Mobile Devices

Mingmin Zhao, Assistant Professor, Department of Computer and Information Science, Penn Engineering

Oleg Sokolsky, Research Professor, Department of Computer and Information Science, Penn Engineering

Lama Al-Aswad, Professor of Ophthalmology, Irene Heinz Given and John La Porte Given Research Professor of Ophthalmology II, Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania Perelman School of Medicine

Graves’ disease, marked by its prevalent but challenging-to-monitor symptom of orbitopathy, or bulging of the eyes, affects a significant portion of the population. This project will leverage the accessibility of mobile devices and advanced 3D face-capture sensors, such as those used for Face ID, to effortlessly measure eyeball protrusion and integrate disease monitoring into patients’ daily routines. Central to this endeavor is the development of a trustworthy AI system capable of adapting high-resolution medical data to consumer-grade mobile scans while ensuring the accuracy and reliability of the measurements. This AI will also focus on detecting subtle, clinically relevant changes in orbitopathy, overcoming the challenge of data variability and ensuring that needs are met across diverse patient demographics. This technology has the potential to pave the way for personalized and equitable health care solutions.


To learn more, visit the ASSET Center.