Overcoming Blind Spots in the Real World: Leveraging Complementary Abilities for Joint Execution

Title: Overcoming Blind Spots in the Real World: Leveraging Complementary Abilities for Joint Execution
Publication Type: Conference Proceedings
Year of Conference: 2019
Authors: Ramakrishnan, R., E. Kamar, B. Nushi, D. Dey, J. Shah, and E. Horvitz
Conference Name: Association for the Advancement of Artificial Intelligence
Date Published: 01/2019
Abstract: Simulators are increasingly used to train agents before deploying them in real-world environments. While training in simulation provides a cost-effective way to learn, poorly modeled aspects of the simulator can lead to costly mistakes, or blind spots. Humans can help guide an agent toward identifying these error regions, but humans themselves have blind spots and noise in execution. We study how learning about the blind spots of both can be used to manage hand-off decisions when humans and agents jointly act in the real world, in which neither is fully trained or evaluated. The formulation assumes that agent blind spots result from representational limitations in the simulation world, which lead the agent to ignore important features that are relevant for acting in the open world. Our approach for blind spot discovery combines experiences collected in simulation with limited human demonstrations. The first step applies imitation learning to demonstration data to identify important features that the human is using but that the agent is missing. The second step uses noisy labels, extracted from action mismatches between the agent and the human across simulation and demonstration data, to train blind spot models. We show through experiments on two domains that our approach learns a succinct representation that accurately captures blind spot regions and avoids dangerous errors in the real world through transfer of control between the agent and the human.
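The pipeline described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: all function names, the discretized-state assumption, and the mismatch-rate threshold are assumptions made here to show the idea of deriving noisy blind spot labels from agent/human action mismatches and handing off control in estimated blind spot states.

```python
# Hypothetical sketch (not the authors' code): derive noisy blind spot
# labels from action mismatches, estimate blind spot regions, and hand
# off control to the human in those regions.

def noisy_blind_spot_labels(states, agent_policy, human_actions):
    """Label a state 1 where the agent's action disagrees with the
    human's demonstrated action (a noisy blind spot signal)."""
    return [int(agent_policy(s) != a) for s, a in zip(states, human_actions)]

def estimate_blind_spots(states, labels, threshold=0.5):
    """Estimate a per-state mismatch rate; states whose rate exceeds
    the threshold are treated as blind spots. Assumes discretized,
    hashable states for simplicity."""
    counts, mismatches = {}, {}
    for s, y in zip(states, labels):
        counts[s] = counts.get(s, 0) + 1
        mismatches[s] = mismatches.get(s, 0) + y
    return {s for s in counts if mismatches[s] / counts[s] > threshold}

def joint_policy(state, agent_policy, human_policy, blind_spots):
    """Transfer control to the human in estimated blind spot states."""
    return human_policy(state) if state in blind_spots else agent_policy(state)

# Toy domain: the agent is blind to the second feature (a hazard flag)
# because it was absent from its simulator; the human observes it.
agent = lambda s: "forward"                      # ignores hazard flag
human = lambda s: "stop" if s[1] else "forward"  # uses hazard flag

states = [(0, 0), (1, 0), (0, 1), (0, 1), (1, 1)]
labels = noisy_blind_spot_labels(states, agent, [human(s) for s in states])
spots = estimate_blind_spots(states, labels)
```

In this toy run, the hazard states `(0, 1)` and `(1, 1)` are flagged as blind spots, and `joint_policy` defers to the human there while letting the agent act elsewhere.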