(Due Sept 15, 2022)

Collaborative Research in Trustworthy AI for Medicine

School of Engineering and Applied Science and Perelman School of Medicine

University of Pennsylvania

Context:

One of the most compelling applications of AI-driven systems is in the broad area of health and medical care: not only can AI-driven systems offer a level of continuous monitoring that exceeds human capabilities, but they can also incorporate each patient’s unique history, state, and preferences. The decreasing cost of sensors, computing elements, communication, and data storage, coupled with significant advances in machine learning algorithms, is a catalyst for this ongoing revolution in medicine. However, trust in AI systems remains limited both in the community of medical professionals and in the patient population. As such, making AI in medicine trustworthy is, arguably, the central challenge in enabling wider adoption.

Existing research, including a number of current projects at Penn, falls broadly into two categories. Researchers in computer science and engineering, usually funded by NSF, focus on advancing core AI technologies, and their limited domain knowledge can lead to solutions that are not clinically meaningful or useful. Researchers in medicine, on the other hand, usually funded by NIH, focus on innovative applications of AI to clinical research, and may run into fundamental obstacles or develop point solutions that do not generalize to other problems. We believe that truly transformative research in AI-driven medicine requires both a focus on clinically relevant problems and foundational advances that lead to generalizable solutions. The research strengths of both the School of Engineering and Applied Science (SEAS) and the Perelman School of Medicine (PSOM), together with their geographical proximity, give Penn a unique opportunity to overcome the hurdles to transformative collaborative research across disciplines and work cultures. This funding program aims to jump-start impactful collaborative research at Penn to advance trustworthy AI for medicine, and is sponsored by the SEAS center ASSET (AI-Enabled Systems: Safe, Explainable, and Trustworthy) and the Penn Institute for Biomedical Informatics (IBI).

Proposal Requirements:

Proposals are welcome from all members of Penn’s faculty. Each proposal should be structured as follows:

  1. Information about investigators (up to 1 page): A proposal must have at least two PIs, one from SEAS and one from PSOM, and should ideally propose a new collaboration. For each PI, include contact information and a brief biosketch. No faculty member may be listed as a PI on more than two proposals.
  2. Proposed research (up to 2 pages): The proposal should describe a clinically relevant problem, the shortcomings of existing approaches, and how AI can potentially provide a transformative solution. It should also describe what foundational advance in trustworthy AI will be pursued and how it relates to the original clinical problem.
  3. Budget justification (up to 1 page): Each selected proposal will be awarded $100,000 for a duration of one year. The proposal should describe how the funds will be used. Note that funds cannot be used for faculty salaries.