I am an assistant professor in computer science and engineering at the Paul G. Allen School at the University of Washington. I lead the Washington Embodied Intelligence and Robotics Development (WEIRD) lab.
Previously, I was a postdoctoral scholar at MIT, collaborating with Russ Tedrake and Pulkit Agrawal.
I spent six wonderful years completing my PhD in machine learning and robotics at BAIR at UC Berkeley, where I was advised by Professor Sergey Levine and Professor Pieter Abbeel. Before that, I completed my bachelor's degree, also at UC Berkeley.
My main research goal is to develop algorithms that enable robotic systems to learn to perform complex tasks in unstructured environments like offices and homes. To that end, I work on building deep reinforcement learning algorithms that can learn in the real world, with and around humans. Recently, our work has focused on deployment-time reinforcement learning, where robots learn directly in human-centric environments, under the following themes:
- Learning foundation models from off-domain sources of data such as video, simulation, or generative models
- Fast and efficient real-world adaptation using pre-trained priors
- Human-in-the-loop interaction and adaptation
- Real-to-sim-to-real policy learning methods
More generally, I am interested in building scalable foundation models from off-domain data, fast and safe adaptation with RL, human-in-the-loop reinforcement learning, reward specification, continual real-world data collection and learning, offline reinforcement learning for robotics, multi-task and meta-learning, dexterous manipulation with robotic hands, and generalization and extrapolation in policies and models. I am also excited about a broader space of problems, including algorithms for assistive robotics, safe exploration, robustness and compositionality in deep learning, and all things embodied intelligence.