Research Overview

Our lab develops robot teammates through three sequential system capabilities:

  1.  Plan:  A system to participate in the early stages of human team planning and infer the agreed-upon “idealized” shared plan.
  2.  Refine:  A system to refine task plans through observation of and interaction with humans in real contexts.
  3.  Execute:  An online system to rapidly predict the spatio-temporal trajectories of future human actions and react accordingly.

Research Contribution #1: Plan

Human teams often form plans by reaching consensus among members of the group. In contrast, most existing models and algorithms for machine-assisted planning assume one human planner and one automated planner. In addition, machine support of team planning requires computationally tractable models for inferring the team’s shared understanding of commitment. However, prior models of human team planning are either qualitative, or formal but not computationally tractable. Finally, machine participation in real-world team planning is not possible when the machine assumes its models are correct and complete.

We have designed computational models of team planning that draw insight from well-established models of human cognition. These model structures support real-time inference of the team’s shared understanding of commitment.


Research Contribution #2: Refine

The purpose of team planning is to form an idealized shared plan, but a team rarely follows its plan exactly. People change their collaboration strategies (i.e., rules and heuristics for task allocation, synchronization, and timing) based on many factors, including past experience, workload, and personal preference. People also find it difficult to explicitly communicate how the team adapts its idealized plan to myriad real-world situations.

These challenges motivate a machine learning approach for refining idealized shared plans for real contexts. However, there are no effective prior techniques for learning human task allocation and scheduling policies from human demonstration: there is typically little data to learn from, no environment simulator or expert emulator, and many learning approaches require regression through a very large state space.

Further, machine learning through remote observation is not sufficient to refine a human-robot team plan. People adapt their strategies through continued interactions with teammates. An effective machine teammate must co-adapt its collaboration strategies to support the human’s learning process.

Insight from models of human cognition makes our approach to policy learning possible with little data and no environment or expert emulator. Our models and algorithms for jointly optimizing human-robot team performance draw on effective human team training processes, including cross-training and perturbation training.
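One data-efficient strategy consistent with this setting is to treat each scheduling decision in a demonstration as a pairwise preference ("this task was chosen over that one") and learn a priority function from those pairs. The sketch below is an illustration, not our published algorithm: the two task features (deadline urgency, proximity to the robot), the demonstrations, and the perceptron-style update rule are all assumptions.

```python
# Sketch: learn a task-priority function from a handful of demonstrations
# by converting each "task i was scheduled before task j" decision into a
# pairwise training example (perceptron-style update on feature differences).

def learn_priority(pairs, n_features, epochs=50, lr=0.1):
    """pairs: list of (features_of_chosen_task, features_of_passed_over_task)."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for chosen, other in pairs:
            diff = [c - o for c, o in zip(chosen, other)]
            score = sum(wi * di for wi, di in zip(w, diff))
            if score <= 0:  # chosen task not yet ranked higher: update weights
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

# Hypothetical features per task: (deadline urgency, proximity to robot).
demos = [
    ((0.9, 0.2), (0.3, 0.8)),  # the expert picked the more urgent task
    ((0.8, 0.5), (0.4, 0.6)),
]
w = learn_priority(demos, n_features=2)
rank = lambda task: sum(wi * fi for wi, fi in zip(w, task))
print(rank((0.9, 0.2)) > rank((0.3, 0.8)))  # learned policy matches the demo
```

Because each demonstrated schedule yields many pairwise comparisons, even a few demonstrations produce a usable training set, which is one way to cope with the scarcity of data noted above.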


Research Contribution #3: Execute

Once a team has refined its plan, the members must coordinate while executing it. However, it is challenging for a machine to monitor a human’s real-time progress through a plan. Prior activity recognition approaches are designed and tuned for specific motions or tasks; no single existing technique provides accurate predictions over both short and long time horizons in most scenarios. Yet to react appropriately, the robot must predict detailed space-time trajectories of human actions over short and long timescales (<1 s to 10–20 s).

Robots also need to make quick adjustments to their plans based on continually updating predictions of human actions. There are no prior task assignment and sequencing algorithms that scale to multi-agent, factory-size problems while supporting on-the-fly scheduling in the presence of temporal and spatial proximity constraints.
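A common building block for this kind of on-the-fly scheduling is a fast feasibility test over the temporal constraints of a candidate schedule. The sketch below assumes the constraints are encoded as a Simple Temporal Network (STN) and checks consistency with Floyd–Warshall; the event names and time bounds are hypothetical.

```python
# Sketch: testing whether an updated schedule still satisfies its temporal
# constraints, using a Simple Temporal Network (STN). Constraints are edges
# in a distance graph; a negative cycle means the schedule is infeasible.

INF = float("inf")

def stn_consistent(n, edges):
    """edges: list of (u, v, w) meaning time[v] - time[u] <= w."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
    for k in range(n):  # Floyd–Warshall all-pairs shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))  # no negative cycle

# Events: 0 = robot starts its task, 1 = human finishes theirs.
# Human finishes between 2 and 5 s after the robot starts: feasible.
print(stn_consistent(2, [(0, 1, 5), (1, 0, -2)]))  # True
# Tightened: must finish within 1 s but needs at least 2 s: infeasible.
print(stn_consistent(2, [(0, 1, 1), (1, 0, -2)]))  # False
```

Running a check like this each time a prediction changes lets the robot reject an infeasible adjustment before committing to it.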

Our algorithms automatically leverage temporal and spatial models of activities to perform early classification of human activities and quickly update robot task plans.
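As a rough illustration of early classification, one can match the observed trajectory prefix against stored spatio-temporal templates and commit to a label as soon as one template dominates by a margin. The 1-D templates, labels, and confidence margin below are hypothetical, not taken from our system.

```python
# Sketch: early classification of a human motion by matching the observed
# trajectory prefix against stored spatio-temporal templates, committing
# as soon as one template is a decisively better match.

def prefix_error(traj, template):
    """Mean squared error between an observed prefix and a template prefix."""
    n = min(len(traj), len(template))
    return sum((a - b) ** 2 for a, b in zip(traj[:n], template[:n])) / n

def early_classify(traj, templates, margin=2.0):
    """Return a label once its error beats all others by `margin`x, else None."""
    errors = {label: prefix_error(traj, t) + 1e-9 for label, t in templates.items()}
    best, runner_up = sorted(errors, key=errors.get)[:2]
    if errors[runner_up] / errors[best] >= margin:
        return best
    return None  # not yet confident: keep observing

# Hypothetical 1-D reach templates toward two different bins.
templates = {"bin_left": [0.0, 0.2, 0.5, 0.9], "bin_right": [0.0, -0.2, -0.5, -0.9]}
print(early_classify([0.0, 0.21], templates))  # classified from just 2 samples
```

Committing early, well before the motion completes, is what buys the robot time to update its own task plan in response.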