Improving Team's Consistency of Understanding in Meetings: Intelligent Agent Participation and Human Subject Studies
Aeronautics and Astronautics
Massachusetts Institute of Technology
Upon concluding a meeting, participants can leave with different understandings of what was discussed. For meetings that result in immediate subsequent action, such as emergency response planning, all participants must share a common understanding of the decisions reached by the team in order to ensure successful execution of their mission. Detecting inconsistencies in understanding among meeting participants is therefore a desired capability for an intelligent system designed to monitor meetings and provide feedback that spurs stronger shared understanding within a group.
In this thesis, we present a computational model for automatically predicting the consistency of team members' understanding of their group's decisions. The model uses dialogue features designed to capture the dynamics of group decision-making. We trained the model on one of the largest publicly available meeting datasets, achieving a prediction accuracy of 64.2% and demonstrating robustness across different meeting phases. To the best of our knowledge, this work is the first to automatically predict levels of shared understanding from natural dialogue.
We then implemented our model in an intelligent system that participated in human team planning meetings for a hypothetical emergency response mission. The system suggested the discussion topics the team would benefit most from reviewing together. Through human subject experiments with 30 participants, we evaluated the utility of this feedback system and observed a statistically significant mean increase of 17.5% in objective measures of the consistency of the teams' understanding, compared with a baseline interactive system.