Ever since the concept first appeared in science fiction, artificial intelligence has promised to revolutionize nearly every facet of society. Machines are already making faster decisions and providing more accurate insights than their human counterparts in a host of industries, and as the capabilities of these AI systems grow, so will the stakes of the decisions they’re entrusted with.
Before people regularly ride in autonomous cars or receive diagnoses from medical chatbots, however, they will need to trust these AI systems as much as they do human drivers or doctors. Achieving that trust is a more complicated prospect than increasing their raw computational power or training them to incorporate new kinds of data; it requires insights from multiple branches of computer science, as well as the input of the people whose trust must be earned.
Penn Engineering’s newly formed ASSET (AI-enabled Systems: Safety, Explainability and Trustworthiness) Center, part of the school’s Innovation in Data Engineering and Science (IDEAS) Initiative, aims to weave those threads together with large-scale research projects.
The Center will act as a catalyst for new collaborations among groups researching machine learning, programming languages, natural language processing, robotics, and human-computer interaction within Penn Engineering. It will also connect them with researchers throughout the University, starting with a series of workshops held at Penn’s Perelman School of Medicine that explore the opportunities and challenges for AI systems in healthcare settings.
Rajeev Alur, Zisman Family Professor in the Department of Computer and Information Science, will serve as ASSET’s inaugural director.
“The key to realizing the full potential of AI-based decision making is to make it more reliable and transparent,” Alur says. “The center will focus on science and tools for developing AI-based systems so that designers can guarantee their reliable operation and users can trust them to meet their expectations.”
As AI systems are incorporated into more facets of society, the decisions they make become more consequential. Building safety into AI systems means ensuring they make correct decisions, such as a self-driving car stopping at a red light, but also that they respond appropriately when they don’t have enough information, recognize when they’ve made a mistake, and deal with any number of hard-to-predict or ambiguous situations.
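To make “responding appropriately without enough information” concrete, one simple approach from the machine learning literature is selective prediction: the system abstains and defers to a human whenever its confidence is low. The sketch below is purely illustrative and is not a description of ASSET’s methods; it assumes a scikit-learn-style classifier exposing `predict_proba`, and the 0.9 confidence threshold is an arbitrary choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_or_abstain(model, x, threshold=0.9):
    """Return a class prediction only when the model is confident enough.

    If the top class probability falls below `threshold`, return None
    so the system can defer the decision instead of guessing.
    """
    probs = model.predict_proba(np.atleast_2d(x))[0]
    best = int(np.argmax(probs))
    return best if probs[best] >= threshold else None

# Toy demonstration on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

print(predict_or_abstain(model, np.array([3.0, 3.0])))     # far from the boundary: confident prediction
print(predict_or_abstain(model, np.array([0.01, -0.01])))  # near the boundary: abstains (None)
```

In a safety-critical deployment, the abstention branch would route the case to a clinician or a human driver rather than simply returning `None`.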
Being able to explain how an AI system makes these decisions is key to improving them. Decision-making algorithms are useful because they can pick up on connections and model interactions too complex for humans to identify on their own, but the internal logic these systems use to arrive at their decisions is opaque: it is learned from training data rather than written as explicit code by programmers. The ASSET Center aims to develop techniques to reveal this hidden reasoning. That sort of insight into an AI-based system’s decision-making process would allow programmers to understand how the input information causally affects the decisions reached.
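One widely used family of techniques for probing a learned model’s hidden reasoning is feature attribution. The article does not specify which methods ASSET will pursue, so the following is only an illustrative sketch of one classic approach, permutation importance: shuffling a single input feature breaks its relationship with the output, and the resulting drop in accuracy measures how much the model’s decisions depend on that feature.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Estimate each input feature's influence on the model's decisions.

    For each feature, shuffle its column and record how much the
    model's accuracy drops; larger drops mean the model leans more
    heavily on that feature.
    """
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break feature j's link to the labels
            drops[j] += baseline - model.score(X_perm, y)
    return drops / n_repeats

# Toy demonstration: feature 0 drives the label, feature 1 is pure noise
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)
print(permutation_importance(model, X, y))  # large drop for feature 0, near zero for feature 1
```

Production-grade versions of this idea exist, such as `sklearn.inspection.permutation_importance`; the point here is only to show how a model’s opaque dependence on its inputs can be measured from the outside.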
The fundamentals of safety and explainability can be explored by zooming into the details of the system’s components, but trustworthiness requires a broader view.
“What makes an AI system trustworthy depends on the person who needs to trust it,” Alur says. “In applications of AI to medicine and healthcare, a patient might care more about safety, since they are asked to trust a device that’s delivering medications based on autonomous decisions, while a clinician might care more about explainability, since they need to understand the reasons behind an AI-based diagnosis.”
Beyond hiring faculty who have expertise in this emerging area, ASSET will host interdisciplinary workshops that connect researchers already working on facets of these problems and provide seed funding for projects that emerge from their collaborations.
The first area of focus for these collaborations is the application of AI to medicine. Gathering researchers from Penn Engineering, Penn Medicine and the Penn Institute for Biomedical Informatics, the initial slate of workshops includes “Patient-facing trustworthy AI: Is it a pipe-dream?,” “Clinician-centric AI: Will caregivers ever trust recommendations?,” and “Perpetuation of health disparities through biased AI: challenges and solutions.”
“The true potential of AI can only be realized if its users can trust the underlying technology,” says Vijay Kumar, Nemirovsky Family Dean of Penn Engineering. “Through its partnerships with clinicians, health care experts, and social science researchers, Penn Engineering is uniquely positioned to play a leadership role in this field.”