One-Shot Imitation Learning with Invariance Matching for Robotic Manipulation

Rutgers University
RSS 2024
*: Supplementary includes a failure case study.

We propose the Invariance-Matching One-shot Policy Learning (IMOP) algorithm, which learns a new manipulation task from a single unannotated demonstration by matching key visual elements between the demonstration and test scenes.

Abstract

Learning a single universal policy that can perform a diverse set of manipulation tasks is a promising new direction in robotics. However, existing techniques are limited to learning policies that can only perform tasks encountered during training, and they require a large number of demonstrations to learn new tasks. Humans, on the other hand, can often learn a new task from a single unannotated demonstration. In this work, we propose the Invariance-Matching One-shot Policy Learning (IMOP) algorithm. In contrast to the standard practice of learning the end-effector's pose directly, IMOP first learns invariant regions of the state space for a given task, and then computes the end-effector's pose by matching the invariant regions between demonstrations and test scenes. Trained on the 18 RLBench tasks, IMOP consistently outperforms the state-of-the-art, with a success rate that is 4.5% higher on average over the 18 tasks. More importantly, IMOP can learn a novel task from a single unannotated demonstration without any fine-tuning, achieving an average success-rate improvement of 11.5% over the state-of-the-art on 22 novel tasks selected across nine categories. IMOP can also generalize to new shapes and learn to manipulate objects that differ from those in the demonstration. Further, IMOP can perform one-shot sim-to-real transfer using a single real-robot demonstration.
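The key mechanism described above is that the end-effector pose is not regressed directly but transferred by matching invariant regions between the demonstration and the test scene. As a rough illustrative sketch of this style of correspondence-based pose transfer (not the authors' actual implementation, which is detailed in the paper), the Python snippet below recovers a rigid transform from matched invariant-region points with the standard Kabsch/SVD method and applies it to the demonstrated gripper pose. All function names and the simple rigid-alignment assumption are illustrative.

import numpy as np

def estimate_rigid_transform(demo_pts, test_pts):
    # Kabsch/SVD alignment of two (N, 3) point sets whose rows are in
    # correspondence (e.g., matched points of an invariant region).
    demo_centroid = demo_pts.mean(axis=0)
    test_centroid = test_pts.mean(axis=0)
    H = (demo_pts - demo_centroid).T @ (test_pts - test_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = test_centroid - R @ demo_centroid
    return R, t

def transfer_end_effector_pose(demo_pose, demo_region, test_region):
    # Map the demonstrated end-effector pose (4x4 homogeneous matrix)
    # into the test scene using the transform recovered from the
    # matched invariant-region points.
    R, t = estimate_rigid_transform(demo_region, test_region)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T @ demo_pose

# Toy usage: a synthetic invariant region rotated and shifted in the test scene.
rng = np.random.default_rng(0)
demo_region = rng.normal(size=(50, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
test_region = demo_region @ R_true.T + np.array([0.2, -0.1, 0.05])
print(transfer_end_effector_pose(np.eye(4), demo_region, test_region))

In this toy example the recovered transform reproduces the synthetic rotation and translation, and the demonstrated pose is carried along with it; in practice the correspondences would come from the learned invariant-region matching rather than from ground-truth point pairs.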

State-of-the-Art Performance on RLBench

Using real-robot trajectories as one-shot demonstrations of novel tasks, IMOP performs one-shot sim-to-real transfer without re-training.