Learning Task-Oriented Grasp Heuristics from Demonstration

Title: Learning Task-Oriented Grasp Heuristics from Demonstration
Publication Type: Thesis
Year of Publication: 2016
Authors: Gutierrez, R. Alejandro
Academic Department: Department of Electrical Engineering and Computer Science
Degree: M.Eng.
Abstract: When people plan their motions for dexterous work, they implicitly consider the next likely step in the action sequence. Almost without conscious thought, we select a grasp that satisfies the implicit constraints of the task to be performed. A robot tasked with dexterous manipulation should likewise grasp the intended object in a way that makes the next step straightforward; neglecting these implicit constraints can leave the object impossible to manipulate in the desired manner. While recent work has begun to address task-dependent constraints, existing approaches require direct specification of the constraints or rely on grasp datasets with manually defined task labels. In this thesis, we present a framework that leverages human demonstration to learn task-oriented grasp heuristics for a set of known objects in an unsupervised manner, and we define a procedure to instantiate grasps from the learned models. Equating distinct motion profiles with the execution of distinct tasks, our approach uses the motion observed during human demonstration to partition the accompanying grasp examples into tasks, incorporating unsupervised motion clustering algorithms into a grasp learning pipeline. To evaluate the framework, we collected a set of human demonstrations of real-world manipulation tasks. The framework with unsupervised task clustering produced results comparable to a semi-supervised condition: it discovered the correct relationships between tasks and objects, and the distributions of the resulting grasp point models followed intuitive heuristic rules (e.g., handle grasps for tools). The grasps instantiated from these models followed the learned heuristics, but had some limitations due to the choice of grasp model and the instantiation method used.
Overall, this work demonstrates that the inclusion of unsupervised motion clustering techniques into a grasp learning pipeline can assist in the production of task-oriented models without the typical overhead of direct task constraint encoding or hand labeling of datasets.
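To make the pipeline concrete: the abstract describes clustering demonstrated motion profiles and using the resulting cluster labels to partition the paired grasp examples by task. The thesis's actual clustering algorithm and motion features are not specified here, so the following is a hypothetical minimal sketch — it resamples each trajectory to a fixed length, clusters the flattened profiles with plain k-means (farthest-point initialization), and groups the grasp examples by cluster. The function names (`resample`, `partition_grasps`) and the choice of k-means are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def resample(traj, n=20):
    # Linearly resample a (T, d) trajectory to n timesteps so all
    # motion profiles share a fixed-length feature representation.
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack(
        [np.interp(t_new, t_old, traj[:, j]) for j in range(traj.shape[1])],
        axis=1,
    )

def kmeans(X, k, iters=50):
    # Plain k-means with deterministic farthest-point initialization;
    # returns one cluster label per row of X.
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min(
            np.stack([((X - c) ** 2).sum(axis=-1) for c in centers]), axis=0
        )
        centers.append(X[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.argmin(
            ((X[:, None] - centers[None]) ** 2).sum(axis=-1), axis=1
        )
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def partition_grasps(trajectories, grasps, k=2):
    # Cluster motion profiles, then partition the paired grasp
    # examples into (assumed) task groups by cluster label.
    feats = np.stack([resample(t).ravel() for t in trajectories])
    labels = kmeans(feats, k)
    groups = {
        c: [g for g, lbl in zip(grasps, labels) if lbl == c] for c in range(k)
    }
    return labels, groups
```

Under the abstract's premise that distinct motion profiles correspond to distinct tasks, each group of grasp examples can then be fit with a task-specific grasp point model, with no hand labeling of the dataset.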
URL: http://hdl.handle.net/1721.1/113153