|When humans work together to form a plan, they often make mistakes: they misspeak, say things out of order, and negate things they had previously said. We aim to read human team planning conversations and extract the final agreed-upon plan so that a robotic agent may assist in design or execution. Previous work shows that a generative model with logic-based priors is effective when the plan being formed is relatively simple. We present an algorithm that expands on this model by incorporating dialogue acts, which indicate how proposed actions are expressed. We compare our model's performance to that of humans on the same task. We also validate the model on a toy problem, achieving the desired output in 8 of 10 runs (compared to the baseline's 3 of 10), and run both the baseline and our expanded model on a more complex input dialogue. To the best of our knowledge, this is the first work to incorporate dialogue acts into a generative model for plan inference.