Autonomously Generating Annotated Training Data for 6D Object Pose Estimation

Issue Date
2020-10-26
Language
en
Abstract
Recently developed deep neural networks achieve state-of-the-art results in 6D object pose estimation for robot manipulation. However, these supervised deep learning methods require expensive annotated training data. Current approaches to reducing those costs frequently use synthetic data from simulations, but they rely on expert knowledge and suffer from the reality gap when transferred to the real world. The present research project is a proof of concept for autonomously generating annotated training data for 6D object pose estimation, addressing both cost reduction and applicability to the real-world task at hand. A training data set is generated for a given grasping task with an industrial robot arm at its intended workstation. The data is then autonomously annotated via background subtraction and point cloud stitching. A state-of-the-art neural network for 6D object pose estimation is trained on the generated data set, and a grasping experiment is conducted to evaluate the quality of the pose estimation. The proposed concept outperforms related work with respect to grasping success rate, while circumventing the reality gap, removing annotation cost, and avoiding expert labor. This demonstrates that the data required to train a 6D object pose estimation neural network can be generated and autonomously annotated with sufficient quality within the workstation of an industrial robot arm for its intended task.
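The background-subtraction step of the annotation pipeline can be sketched as follows. This is a minimal illustrative example, not the thesis's implementation: the function name, the fixed intensity threshold, and the toy images are assumptions introduced here for clarity.

```python
import numpy as np

def segment_by_background_subtraction(scene, background, threshold=30):
    """Return a boolean object mask: pixels whose intensity differs from
    the empty-workstation background image by more than `threshold`.
    (Illustrative sketch; threshold value is an assumption.)"""
    diff = np.abs(scene.astype(np.int32) - background.astype(np.int32))
    return diff > threshold

# Toy example: a flat background with one bright "object" patch.
background = np.full((8, 8), 100, dtype=np.uint8)
scene = background.copy()
scene[2:5, 2:5] = 200  # object occupies a 3x3 pixel region
mask = segment_by_background_subtraction(scene, background)
print(mask.sum())  # 9 pixels flagged as object
```

In practice such a per-pixel mask would be computed per camera view and combined with point cloud stitching across views to yield the annotated 6D pose; the sketch above covers only the 2D segmentation idea.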
Faculty
Faculteit der Sociale Wetenschappen