Mark Yatskar (University of Pennsylvania): “Inherent Interpretability via Language Model Guided Bottleneck Design”
Levine 307

Presentation Abstract: As deep learning systems improve, their applicability to critical domains is hampered by a lack of transparency. Post-hoc explanations attempt to address this concern, but they provide […]