Paper by Alan Chan, Rebecca Salganik, Alva Markelius, Chris Pang, Nitarshan Rajkumar, Dmitrii Krasheninnikov, Lauro Langosco, Zhonghao He, Yawen Duan, Micah Carroll, Michelle Lin, Alex Mayhew, Katherine Collins, Maryam Molamohammadi, John Burden, Wanru Zhao, Shalaleh Rismani, Konstantinos Voudouris, Umang Bhatt, Adrian Weller, David Krueger, Tegan Maharaj
David is a member of Cambridge's Computational and Biological Learning lab (CBL), where he leads a research group focused on Deep Learning and AI Alignment.
David has been interested in the potential benefits, social impacts, and catastrophic risks of AI since learning, at the start of his undergraduate studies (in Mathematics, at Reed College) in 2007, that AI wasn't pure science fiction.
After hearing about Deep Learning in 2012, he realized progress in AI was about to take off, and decided to enter the field, beginning his graduate studies at the University of Montreal in 2013. His research has covered many areas in Deep Learning, including generative modeling, Bayesian Deep Learning, understanding generalization, and robustness.
His current interests are primarily: 1) formalizing and testing AI Alignment concerns and approaches, especially to do with learning reward functions, 2) understanding Deep Learning, and 3) techniques for aligning foundation models.
Related resources
- Harms from Increasingly Agentic Algorithmic Systems