
Dinesh Jayaraman (University of Pennsylvania): Engineering Better Robot Learners: Exploration and Exploitation
Abstract:
Industry is placing big bets on “brute forcing” robotic control, but such approaches ignore the centrality of resource constraints in robotics, such as power, compute, time, and data. Towards realizing a true engineering discipline of robotics, my research group has been “exploiting and exploring” robot learning: exploiting to push the limits of what can be achieved with today’s prevalent approaches, and exploring better design principles for masterful and minimalist robots in the future. As examples of “exploit,” we have trained quadruped robots to perform circus tricks on yoga balls and robot arms to perform household tasks in entirely unseen scenes with unseen objects. As examples of “explore,” we are studying the sensory requirements of robot learners: what sensors do they need, and when do they need them, during training and task execution? In this talk, I will highlight these examples and discuss some lessons we have learned in our research towards better-engineered robot learners.
Biography:
Dinesh Jayaraman was born in Chennai, India in 1989. He received the B.Tech degree in Electrical Engineering from the Indian Institute of Technology Madras in 2011 and the PhD degree in Electrical and Computer Engineering from the University of Texas at Austin in 2017. After a postdoc at the University of California, Berkeley, he joined the University of Pennsylvania in 2020, where he now serves as an Assistant Professor in the Department of Computer and Information Science. He is a core member of the GRASP laboratory at Penn, where he leads the Penn Perception, Action, and Learning (Penn PAL) research group. During Fall 2019, he was a visiting researcher at Meta (then Facebook).
Dinesh’s research group has worked on various topics in robot learning, reinforcement learning, and computer vision. This includes early work on self-supervised visual representation learning from ego-motion and temporal continuity, active visual recognition and reconstruction, video prediction, model-based planning and model-based reinforcement learning, sensing touch and contact, entity-centric visual representations, and the development and use of foundation models for robotics.
Dinesh’s research is funded by U.S. government agencies including the National Science Foundation (NSF), the Office of Naval Research (ONR), and the Defense Advanced Research Projects Agency (DARPA). His group’s research has received a Best Paper Award at CoRL 2022, a Best Paper Runner-Up Award at ICRA 2018, a Best Application Paper Award at ACCV 2016, an NSF CAREER Award in 2023, and an Amazon Research Award in 2021, and has been covered in The Economist, TechCrunch, and several other press outlets.
Zoom: https://upenn.zoom.us/j/98963621993