Picture this: you’re getting ready to watch a movie on Netflix, popcorn in hand, and several films pop up that have been curated just for you. What are you going to do: choose one from the list recommended by the underlying AI algorithm, or worry about how that list was generated and whether you should trust it? Now imagine you’re at the doctor’s office, and the physician consults an online system to determine what dosage of medicine you, the patient, should take. Would you feel comfortable having a course of treatment chosen for you by artificial intelligence? And what would the future of medicine look like if the doctor were not involved in the decision at all?
What happens when AI goes wrong? Probably not the Terminator or the Matrix – despite what Hollywood suggests – but something that could still harm a human: a self-driving car that gets into an accident, or an algorithm that discriminates against certain people. Fortunately, Penn has innovative researchers like Eric Wong, who build tools to make sure AI works correctly!
“What makes an AI system trustworthy depends on the person who needs to trust it,” says Rajeev Alur, director of the ASSET (AI-enabled Systems: Safety, Explainability and Trustworthiness) Center, part […]