Interactive and Interpretable Machine Learning Models for Human Machine Collaboration

Title: Interactive and Interpretable Machine Learning Models for Human Machine Collaboration
Publication Type: Thesis
Year of Publication: 2015
Authors: Kim, B.
Academic Department: Department of Aeronautics and Astronautics
Degree: Ph.D.
Abstract: I envision a system that enables successful collaborations between humans and machine learning models by harnessing their relative strengths to accomplish what neither can do alone. Machine learning techniques and humans have skills that complement each other: machine learning techniques are good at computation on data at the lowest level of granularity, whereas people are better at abstracting knowledge from their experience and transferring that knowledge across domains. The goal of this thesis is to develop a framework for human-in-the-loop machine learning that enables people to interact effectively with machine learning models to make better decisions, without requiring in-depth knowledge about machine learning techniques.

Many of us interact with machine learning systems every day. Systems that mine data for product recommendations, for example, are ubiquitous. However, these systems compute their output without end-user involvement, and there are typically no life-or-death consequences when the machine learning result is not acceptable to the user. In contrast, domains where decisions can have serious consequences (e.g., emergency response planning, medical decision-making) require the incorporation of human experts' domain knowledge. These systems also must be transparent to earn experts' trust and be adopted in their workflow. The challenge addressed in this thesis is that traditional machine learning systems are not designed to extract domain experts' knowledge from their natural workflow, or to provide pathways for the human domain expert to directly interact with the algorithm to interject their knowledge or to better understand the system output. For machine learning systems to make a real-world impact in these important domains, they must be able to communicate with highly skilled human experts to leverage their judgment and expertise, and to share useful information or patterns from the data.

In this thesis, I bridge this gap by building human-in-the-loop machine learning models and systems that compute and communicate machine learning results in ways that are compatible with the human decision-making process, and that can readily incorporate human experts' domain knowledge. I start by building a machine learning model that infers human teams' planning decisions from the structured form of natural language of team meetings. I show that the model can infer a human team's final plan with 86% accuracy on average. I then design an interpretable machine learning model that "makes sense to humans" by exploring and communicating patterns and structure in data to support human decision-making. Through human subject experiments, I show that this interpretable machine learning model offers statistically significant quantitative improvements in interpretability while preserving clustering performance. Finally, I design a machine learning model that supports transparent interaction with humans without requiring that a user have expert knowledge of machine learning techniques. I build a human-in-the-loop machine learning system that incorporates human feedback and communicates its internal states to humans, using an intuitive medium for interaction with the machine learning model. I demonstrate the application of this model in an educational domain, in which teachers cluster programming assignments to streamline the grading process.
URL: http://hdl.handle.net/1721.1/98680
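The abstract's second contribution is an interpretable clustering model whose output "makes sense to humans." One common route to that kind of interpretability is to summarize each cluster with a prototype: an actual example from the data that a domain expert can inspect directly. The sketch below is a generic Python illustration of that idea using plain k-medoids; it is not the thesis's own model, and the function names and toy data are hypothetical.

```python
# A minimal, hypothetical sketch of prototype-based interpretable clustering.
# NOT the model from the thesis: plain k-medoids is swapped in purely to
# illustrate why prototypes (actual data points) are easy for people to inspect.

import random


def dist(a, b):
    # Euclidean distance between two equal-length tuples.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def k_medoids(points, k, n_iter=20, seed=0):
    """Cluster `points` around k prototypes drawn from the data itself."""
    rng = random.Random(seed)
    medoids = rng.sample(points, k)
    for _ in range(n_iter):
        # Assignment step: attach every point to its nearest prototype.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist(p, medoids[i]))
            clusters[j].append(p)
        # Update step: a cluster's new prototype is the member that
        # minimizes total distance to the other members.
        new_medoids = [
            min(members, key=lambda c: sum(dist(c, p) for p in members))
            if members else medoids[j]
            for j, members in enumerate(clusters)
        ]
        if new_medoids == medoids:  # converged
            break
        medoids = new_medoids
    return medoids, clusters


if __name__ == "__main__":
    # Two obvious groups in 2-D. The returned prototypes are real data points,
    # so "explaining" a cluster means showing one of its own members.
    data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.2)]
    prototypes, clusters = k_medoids(data, k=2)
    for m, members in zip(prototypes, clusters):
        print("prototype:", m, "cluster:", members)
```

Because each prototype is a real data point rather than, say, centroid coordinates in a high-dimensional feature space, a cluster can be presented to an expert simply by showing one of its own members, which is the general sense of interpretability the abstract describes.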