Waypoint-Based Imitation Learning for Robotic Manipulation

While imitation learning methods have seen a resurgence of interest for robotic manipulation, the well-known problem of
*compounding errors* continues to afflict behavioral cloning (BC). Waypoints can help address this problem by
reducing the horizon of the learning problem for BC, and thus the errors that compound over time. However, waypoint labeling is
underspecified, and requires additional human supervision. Can we generate waypoints automatically without any
additional human supervision? Our key insight is that if a trajectory segment can be approximated by linear motion, the
endpoints can be used as waypoints. We propose *Automatic Waypoint Extraction* (AWE) for imitation learning, a
preprocessing module that decomposes a demonstration into a minimal set of waypoints which, when interpolated linearly,
approximate the trajectory up to a specified error threshold. AWE can be combined with any BC algorithm, and we find
that AWE can increase the success rate of state-of-the-art algorithms by up to 25% in simulation and by 4-28% on
real-world bimanual manipulation tasks, reducing the decision-making horizon by up to a factor of 10.
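To make the key insight concrete, the sketch below shows one way to extract waypoints by recursive splitting: keep a segment's endpoints if the straight line between them stays within an error threshold of the trajectory, and otherwise split at the point of maximum deviation. This is a minimal illustration in the spirit of the described decomposition (the splitting rule and function names here are our own assumptions, not the paper's implementation, which selects a minimal waypoint set via its own optimization).

```python
import numpy as np

def point_to_segment_dist(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    ab = b - a
    denom = float(np.dot(ab, ab))
    if denom == 0.0:
        return float(np.linalg.norm(p - a))
    t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def extract_waypoints(traj, eps):
    """Return indices of waypoints such that linearly interpolating
    between consecutive waypoints stays within eps of the trajectory.
    traj: (T, D) array of end-effector positions; eps: error threshold.
    Illustrative recursive-splitting sketch, not the AWE implementation."""
    def recurse(lo, hi):
        if hi - lo < 2:
            return []
        # Deviation of each intermediate point from the straight line lo->hi.
        dists = [point_to_segment_dist(traj[i], traj[lo], traj[hi])
                 for i in range(lo + 1, hi)]
        k = int(np.argmax(dists)) + lo + 1
        if dists[k - lo - 1] <= eps:
            return []  # linear approximation is good enough; keep endpoints only
        # Otherwise split at the worst point and recurse on both halves.
        return recurse(lo, k) + [k] + recurse(k, hi)
    return [0] + recurse(0, len(traj) - 1) + [len(traj) - 1]
```

For example, an L-shaped trajectory with points along two perpendicular lines reduces to three waypoints (start, corner, end), while a straight-line trajectory reduces to just its two endpoints, shrinking the horizon the BC policy must handle.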