My goal is to ensure AI is beneficial to society. To this end, I am researching how neural networks internally work, including:
I am doing this work at FAR AI. If you are interested in it, email me, and consider joining us! Also consider joining us if you want to pursue your own independent AI safety agenda.
Previously I worked at Redwood Research on interpretability research and software development.
I hold a PhD in machine learning from the University of Cambridge, where I was supervised by Prof. Carl Rasmussen. My research focused on improving uncertainty quantification in neural networks (NNs) using Bayesian principles.