Learning Visual Groups From Co-occurrences in Space and Time.

Isola, P., Zoran, D., Krishnan, D., Adelson, E.H.


Abstract

We propose a self-supervised framework that learns to group visual entities based on their rate of co-occurrence in space and time. To model statistical dependencies between the entities, we set up a simple binary classification problem in which the goal is to predict if two visual primitives occur in the same spatial or temporal context. We apply this framework to three domains: learning patch affinities from spatial adjacency in images, learning frame affinities from temporal adjacency in videos, and learning photo affinities from geospatial proximity in image collections. We demonstrate that in each case the learned affinities uncover meaningful semantic groupings. From patch affinities we generate object proposals that are competitive with state-of-the-art supervised methods. From frame affinities we generate movie scene segmentations that correlate well with DVD chapter structure. Finally, from geospatial affinities we learn groups that relate well to semantic place categories.
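The abstract does not come with code, so the snippet below is only a minimal sketch of the setup it describes: a siamese binary classifier trained to predict whether two image patches co-occur in the same spatial context, whose output probability then serves as a learned affinity. It is written in PyTorch; the encoder architecture, layer sizes, and all function and variable names are illustrative assumptions, not the authors' model.

import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    # Small convnet embedding a patch (assumed architecture, not the paper's).
    # Expects patches of at least roughly 16x16 pixels.
    def __init__(self, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

class CooccurrenceClassifier(nn.Module):
    # Predicts whether two visual primitives occur in the same context.
    def __init__(self, embed_dim=64):
        super().__init__()
        self.encoder = PatchEncoder(embed_dim)
        self.head = nn.Linear(2 * embed_dim, 1)

    def forward(self, a, b):
        za, zb = self.encoder(a), self.encoder(b)
        return self.head(torch.cat([za, zb], dim=1))  # logit for "co-occur"

model = CooccurrenceClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(pos_a, pos_b, neg_a, neg_b):
    # Positive pairs are, e.g., spatially adjacent patches from the same
    # image; negative pairs are sampled at random.
    a = torch.cat([pos_a, neg_a])
    b = torch.cat([pos_b, neg_b])
    labels = torch.cat([torch.ones(len(pos_a)), torch.zeros(len(neg_a))]).unsqueeze(1)
    loss = loss_fn(model(a, b), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# At test time, torch.sigmoid(model(a, b)) gives the affinity between two
# primitives, which can feed a grouping or segmentation stage.

The same template applies to the paper's other two domains by swapping the primitives and the notion of adjacency: whole video frames with temporal adjacency, or photos with geospatial proximity.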

Information

title:
Learning Visual Groups From Co-occurrences in Space and Time.
author:
Isola, P., Zoran, D., Krishnan, D., Adelson, E.H.
citation:
International Conference on Learning Representations, Workshop paper, 2016
shortcite:
International Conference on Learning Representations, Workshop paper, 2016
year:
2016
created:
2016-01-01
keyword:
adelson
www:
http://persci.mit.edu/pub_abstracts/learning-visual-groups.html
pdf:
http://persci.mit.edu/pub_pdfs/learning-visual-groups.pdf
pageid:
learning-visual-groups
type:
publication
 