Algorithmic Reasoning in Large Language Models

Faculty: Surbhi Goel

Opportunity: Design new optimization algorithms and architectural fixes that enable current large language models (LLMs) to solve logical “reasoning” tasks robustly and without “hallucinating.”
Challenge: How do we open the gigantic black box of LLMs to understand how they reason and precisely quantify their failure modes? How do we certify that they are indeed robust to all out-of-distribution interactions?