Spring 2026
_______________________________________________________________________________________________
Video Screening of Terence Tao’s Lecture
“The Potential for AI in Science and Mathematics”
Monday, January 26th, 3:30 – 4:30 pm, KAP 414
_______________________________________________________________________________________________
Harold Williams, USC
Monday, February 2nd, 3:30 – 4:30 pm, KAP 414
Title: An Introduction to Automated Theorem Proving in Lean
Abstract: In this talk we will give an introduction to Lean, a programming language designed for the formal verification of mathematical proofs. Lean and its flagship library, mathlib, have been the focus of a dedicated user community for about a decade, but their visibility has grown significantly in light of recent advances in AI.
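To give a flavor of what a formally verified proof looks like, here is a toy Lean 4 theorem proved by induction. This is purely illustrative and not drawn from the talk; the lemma already exists in mathlib as Nat.add_comm, and the name my_add_comm is invented for this sketch.

```lean
-- Commutativity of addition on the natural numbers, proved by
-- induction on n. Every step is checked by Lean's kernel.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => rw [Nat.add_zero, Nat.zero_add]
  | succ k ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

Each `rw` rewrites the goal using a named lemma; when the two sides become identical, Lean closes the goal automatically.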
Lean-based automated theorem provers are now improving at a steady pace, and we will compare and contrast the training of such systems with the training of large language models such as GPT, Claude, or Gemini. We will also give a brief introduction to Monte Carlo Tree Search, an algorithm whose presence or absence provides one of the main high-level distinctions among different state-of-the-art automated provers. Its use in particular distinguishes Seed-Prover and Aristotle, two frontier models developed respectively by ByteDance and Harmonic, the latter in collaboration with an academic team including myself, Sergei Gukov, Dan Halpern-Leistner, and Alex Meiburg.
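Monte Carlo Tree Search itself can be sketched compactly. The toy example below (entirely my own illustration, unrelated to any actual prover) runs the four standard MCTS phases, with UCT selection, on a miniature game of Nim: players alternately remove one or two stones, and whoever takes the last stone wins.

```python
# Minimal Monte Carlo Tree Search (UCT) on a toy Nim game.
# Illustrative only -- production provers use far richer search and learned models.
import math
import random

def moves(stones):
    return [m for m in (1, 2) if m <= stones]

class Node:
    def __init__(self, stones, parent=None):
        self.stones = stones      # stones remaining; some player is to move here
        self.parent = parent
        self.children = {}        # move -> Node
        self.visits = 0
        self.wins = 0.0           # wins from the perspective of the player who moved INTO this node

def uct_select(node, c=1.4):
    # Pick the child maximizing exploitation + exploration.
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones):
    # Random playout; returns 1 if the player to move at `stones` wins.
    player = 0
    while stones > 0:
        stones -= random.choice(moves(stones))
        player ^= 1
    return 1 if player == 1 else 0   # at 0 stones, the previous mover has won

def mcts(root_stones, iterations=2000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCT.
        while node.stones > 0 and len(node.children) == len(moves(node.stones)):
            node = uct_select(node)
        # 2. Expansion: add one unexplored child.
        if node.stones > 0:
            m = random.choice([m for m in moves(node.stones) if m not in node.children])
            node.children[m] = Node(node.stones - m, parent=node)
            node = node.children[m]
        # 3. Simulation: random playout from the new node.
        result = rollout(node.stones)
        # 4. Backpropagation: alternate the winning perspective up the tree.
        while node is not None:
            node.visits += 1
            node.wins += 1 - result  # the mover into `node` wins when `node`'s player loses
            result = 1 - result
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda m: root.children[m].visits)
```

From 4 stones the winning move is to take 1 (leaving a multiple of 3 for the opponent), and with a few thousand iterations the search concentrates its visits on that move.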
_______________________________________________________________________________________________
Sergei Gukov, Caltech
Monday, February 9th, 3:30 – 4:30 pm, KAP 414
Title: The role of AI in mathematical (re)search
Abstract: At its core, scientific research is a search, a search for new ideas, new patterns, and new ways to explain or prove things. In this talk, I invite you to explore how AI is reshaping different stages of this process. We will see that while AI excels at many tasks, it still struggles with others, such as long-horizon reasoning or far-out-of-distribution generalization. I view this as good news: it highlights how much meaningful AI research remains to be done. In fact, the goal of expanding AI’s role in mathematical research has become a motivation for advancing AI itself. I am genuinely excited that these two fields have come into such close contact over the past few years.
_______________________________________________________________________________________________
Gunnar Carlsson, Stanford University and BluelightAI Inc.
Monday, February 23rd, 3:30 – 4:30 pm, KAP 414
Title: Topology, Data Science, and Deep Learning
Abstract: Approximating data sets by graphs and simplicial complexes has proven to be a very useful way to obtain qualitative information about data, and has more recently been shown to contribute similarly to artificial intelligence. I will discuss the mathematics around this, with examples from various domains.
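The basic construction can be sketched in a few lines: connect every pair of data points at distance at most a chosen scale (the 1-skeleton of a Vietoris–Rips complex), then read off a simple qualitative invariant such as the number of connected components. The data and threshold below are invented for illustration and are not from the talk.

```python
# Build the neighborhood graph of a point cloud at scale eps
# (the 1-skeleton of a Vietoris-Rips complex) and count its
# connected components with union-find.
import math

def neighborhood_graph(points, eps):
    """Edges between all pairs of points at distance <= eps."""
    n = len(points)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) <= eps]

def components(n, edges):
    """Number of connected components, via union-find with path compression."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

# Two well-separated clusters are seen as two components at a small scale.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
print(components(len(pts), neighborhood_graph(pts, eps=0.5)))  # -> 2
```

Varying `eps` and tracking how such invariants appear and disappear is the starting point for persistent homology.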
_______________________________________________________________________________________________
Javier Gomez Serrano, Brown University
Monday, March 2nd, 3:30 – 4:30 pm, KAP 414
Title: Mathematical Exploration and Discovery at Scale
Abstract: Machine learning is transforming mathematical discovery, enabling advances on longstanding open problems. In this talk, I will discuss AlphaEvolve, a general-purpose evolutionary coding agent that uses large language models to autonomously discover old and new mathematical constructions and potentially go beyond them. AlphaEvolve tackles a wide variety of problems across analysis, geometry, combinatorics, and number theory. In some instances, AlphaEvolve is also able to generalize results for a finite number of input values into a formula valid for all input values. Furthermore, we are able to combine this methodology with Deep Think and AlphaProof in a broader framework where the additional proof assistants and reasoning systems provide automated proof generation and further mathematical insights. This illustrates how general-purpose AI systems can systematically and successfully explore broad mathematical landscapes at unprecedented speed, leading us to do mathematics at scale.
_______________________________________________________________________________________________
Shang-Hua Teng, USC
Monday, March 9th, 3:30 – 4:30 pm, KAP 414
Title: Understanding and Characterizing Regularization: Learnability and Physics-Based Energy Guidance
Abstract: The quintessential learning algorithm of empirical risk minimization (ERM) is known to fail in various settings for which uniform convergence does not characterize learning. Relatedly, the practice of machine learning is rife with considerably richer algorithmic techniques, perhaps the most notable of which is regularization. Nevertheless, no such technique or principle has broken away from the pack to characterize optimal learning in these more general settings. The purpose of this research direction is to understand the role of regularization in data-driven machine learning.
First, we focus on an abstract statistical learning framework and present our work on characterizing the power of regularization in perhaps the simplest setting for which ERM fails: multiclass learning with arbitrary label sets. Using one-inclusion graphs (OIGs), we exhibit a local-regularization approach to obtain optimal learning algorithms that dovetail with tried-and-true algorithmic principles: Occam’s Razor as embodied by structural risk minimization (SRM), the principle of maximum entropy, and Bayesian inference.
Second, we share some of our ongoing progress on designing physics-guided energy regularization for data-driven learning of weak solutions to parabolic PDEs. The goal here is to extend semi-supervised learning by exploiting auxiliary data and the underlying physical model to construct stronger regularization, enabling more efficient learning with optimal estimation and faster generalization.
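For readers unfamiliar with regularization, the textbook instance is ridge regression, where an L2 penalty shrinks the empirical-risk-minimizing fit. The one-dimensional sketch below is only this classical example, not the local-regularization or physics-guided schemes of the talk; the function name and data are invented for illustration.

```python
# One-dimensional ridge regression in closed form:
# minimize over w the penalized empirical risk
#     sum_i (y_i - w*x_i)^2 + lam * w^2,
# whose unique minimizer is w = (sum x_i y_i) / (sum x_i^2 + lam).

def ridge_1d(xs, ys, lam):
    """Closed-form minimizer of the L2-regularized least-squares objective."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]           # data lying exactly on y = 2x
print(ridge_1d(xs, ys, 0.0))    # -> 2.0 (lam = 0: plain ERM recovers the slope)
print(ridge_1d(xs, ys, 14.0))   # -> 1.0 (the penalty shrinks the estimate toward 0)
```

Setting `lam = 0` recovers unregularized ERM; increasing it trades data fit for a smaller, more stable estimate.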
Joint work (COLT 2024) with Julian Asilis, Siddartha Devic, Shaddin Dughmi, and Vatsal Sharan.
Joint work with Xiaohui Chen and Zixiang Zhou.
_______________________________________________________________________________________________