NSF Workshop on Science of Safe AI
Glandt Forum, Singh Center for Nanotechnology, 3205 Walnut Street, Philadelphia, Pennsylvania

To learn more about this event, check out our page!
Abstract: Extracting insights from imaging data used to be straightforward: every component of imaging systems was engineered by humans, and the analysis and interpretation of the collected data were driven by […]
Abstract: Data-driven systems hold immense potential to positively impact society, but their reliability remains a challenge. Their outputs are often brittle to changes in their training data, leaving them […]
Abstract: Controlling language models is key to unlocking their full potential and making them useful for downstream tasks. Successfully deploying these models often requires both task-specific customization and rigorous auditing […]
Abstract: Machine learning applications are increasingly reliant on black-box pretrained models. To ensure safe use of these models, techniques such as unlearning, guardrails, and watermarking have been proposed to curb […]
Abstract: Large Language Models (LLMs) are vulnerable to adversarial attacks, which bypass common safeguards put in place to prevent these models from generating harmful output. Notably, these attacks can be […]
Abstract: Robust simulation and precise modeling of physical dynamics are essential for advancing perception, planning, and control in the development of generalist physical agents. In this talk, I will present […]
Abstract: American democracy has been undermined by an “infodemic” of fake news, coupled with the widespread segregation of consumers into ideologically homogenous echo chambers by inscrutable algorithms deployed by rapacious […]
Abstract: Neurosymbolic Program Synthesis (NSP) integrates neural networks and symbolic reasoning to tackle complex tasks requiring both perception and logical reasoning. This talk provides an overview of the NSP framework […]