Xun Gao
CU Boulder
Abstract: Generative models in machine learning have attracted a lot of attention for their practical applications. In this talk, we explore how quantum correlations can enhance these models and investigate the potential quantum advantage based on no-go theorems for hidden-variable theories. Unlike much of quantum machine learning theory, which focuses directly on sample complexity, we focus on the expressive power of the learning models. We give an example showing how quantum correlations, specifically contextuality, can define a quantum neural network that outperforms any reasonable classical neural network in terms of the number of hidden neurons required for a language translation task. This includes a proof for artificially constructed data and numerical results for real-world data. We will also briefly mention a possible mathematical framework that could solidify this claim. This direction is still in its early stages, and I hope this talk will inspire others to make more connections from foundational research in quantum information to practically useful problems in machine learning.
Xun Gao is an assistant professor at the University of Colorado Boulder and an associate fellow at JILA. He received his PhD from Tsinghua University and was a postdoctoral researcher at Harvard University. His research interests include quantum computational advantage and quantum machine learning.
All lectures are held in CTLM102 unless otherwise specified.
Pre-seminar snacks will be offered in CoorsTek 140/150 from 3:30pm-4:00pm.