
The ASSET Center: Enabling Trust Between AI and Its Users

Picture this: you’re getting ready to watch a movie on Netflix, popcorn in hand, when several films pop up that have been curated just for you. What will you do: choose one from the list recommended by the underlying AI algorithm, or worry about how the list was generated and whether you should trust it? Now imagine you’re at the doctor’s office and the physician consults an online system to determine what dosage of medicine you should take. Would you feel comfortable having a course of treatment chosen for you by artificial intelligence? And what will the future of medicine look like if the doctor is not involved in the decision at all?

In the Spotlight: Eric Wong and Developing Debuggable AI Systems

What happens when AI goes wrong? Probably nothing like The Terminator or The Matrix, despite what Hollywood suggests, but rather something that can still harm people: a self-driving car that gets into an accident, or an algorithm that discriminates against certain groups. Fortunately, Penn has innovative researchers like Eric Wong who build tools to make sure AI works correctly!