In this talk I’ll present a new approach for highly accurate bottom-up object segmentation. Given an image, the approach rapidly generates a set of regions that delineate candidate objects. The key idea is to train an ensemble of figure-ground segmentation models directly from a large dataset of annotated object segmentations. Extensive experiments demonstrate that the approach outperforms prior object proposal algorithms by a significant margin while having the lowest running time of the methods compared. It also generalizes well across datasets, indicating that it learns a broadly applicable model of bottom-up segmentation.
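To make the ensemble-of-proposals idea concrete, here is a minimal illustrative sketch, not the talk's actual system: each member of a (here trivially simple, threshold-based) ensemble of figure-ground models produces a candidate binary mask, and the distinct non-empty masks are pooled into a set of region proposals. All function names and the toy image are invented for illustration.

```python
import numpy as np

def threshold_model(thresh):
    """One toy figure-ground model: foreground = pixels brighter than a threshold.
    A real ensemble member would be a learned segmentation model."""
    def predict(image):
        return (image > thresh).astype(np.uint8)
    return predict

def generate_proposals(image, models):
    """Run every model in the ensemble; keep distinct, non-empty masks as proposals."""
    proposals = []
    for model in models:
        mask = model(image)
        if mask.any() and not any(np.array_equal(mask, m) for m in proposals):
            proposals.append(mask)
    return proposals

# Toy grayscale image: a bright square and a dimmer square on a dark background.
img = np.zeros((8, 8))
img[1:4, 1:4] = 0.9
img[5:8, 5:8] = 0.6

ensemble = [threshold_model(t) for t in (0.3, 0.5, 0.8)]
masks = generate_proposals(img, ensemble)
print(len(masks))  # → 2: {both squares} and {bright square only}
```

Different members carve out different hypotheses about figure versus ground, so pooling their outputs yields a diverse set of candidate object regions; the learned version replaces the thresholds with models trained on annotated segmentations.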