I am a graduate student in the Department of Aeronautics and Astronautics, working with the Interactive Robotics Group (IRG). Within the group, I focus on explainable robotics: how can complex computer and robotic systems explain their policies and state to humans with no special training? Just as most people can look at a bicycle and spot flat tires or severed brake cables, could we build robots that are rapidly diagnosable, at least at a high level?
My interest in this question is strongly rooted in my previous experience. After completing my undergraduate degrees in Computer Science and Aeronautics and Astronautics at MIT, along with an MEng in Computer Science focused on AI, I worked for two years at Amazon Robotics. There, I grew to appreciate the massive scale of industrial robotics, but I also became frustrated by how poorly robot troubleshooting scaled. The operators and associates working alongside the robots had no insight into how the lumbering machines worked. I would like to change that.
Outside of research, I enjoy rowing, piano, cooking, and reading.