Can LLMs Achieve Causal Reasoning and Cooperation?
Learning objectives
- Evaluating the emergent causal reasoning capabilities and limitations of modern LLMs across diverse domains.
- Enhancing LLMs' causal inference by guiding them through structured, step-by-step formal methodologies.
- Exploring how individual causality scales to group outcomes and whether LLMs learn cooperation in multi-agent simulations.
Speaker Bio
Dr. Zhijing Jin (she/her) is an Assistant Professor at the University of Toronto and a Research Scientist at the Max Planck Institute. She serves as a CIFAR AI Chair, an ELLIS advisor, and a faculty member at the Vector Institute and the Schwartz Reisman Institute. She co-chairs the ACL Ethics Committee and the ACL Year-Round Mentorship program. Her research focuses on Causal Reasoning with LLMs and AI Safety in Multi-Agent LLMs. She has published over 80 papers and has received the ELLIS PhD Award, three Rising Star awards, and two Best Paper awards at NeurIPS 2024 Workshops.