A Unified Mixture Framework for Motion Segmentation: Incorporating Spatial Coherence and Estimating the Number of Models
Yair Weiss and Edward H. Adelson
Published in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1996.
Describing a video sequence in terms of a small number of coherently moving segments is useful for tasks ranging from
video compression to event perception. A promising approach is to view the motion segmentation problem in a mixture
estimation framework. However, existing formulations generally use only the motion data and thus fail to make use of
static cues when segmenting the sequence. Furthermore, the number of models is either specified in advance or
estimated outside the mixture model framework. In this work we address both of these issues. We show how to add spatial constraints to the mixture formulation and present a variant of the EM algorithm that makes use of both the form and the motion constraints. Moreover, this algorithm
estimates the number of segments given knowledge about the level of model failure expected in the sequence. The
algorithm's performance is illustrated on synthetic and real image sequences.
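The mixture-estimation view the abstract refers to can be made concrete with a small sketch. The code below is not the algorithm of the paper (which additionally incorporates static form constraints and estimates the number of models from an expected level of model failure); it is a minimal, motion-only EM fit of K translational motion models to a set of flow vectors, where the function name em_motion_mixture, the noise level sigma, and the synthetic two-layer data are all illustrative assumptions.

import numpy as np

def em_motion_mixture(flow, K=2, iters=50, sigma=0.5, seed=0):
    """Minimal EM for a mixture of K translational motion models.

    flow : (N, 2) array of per-pixel optic-flow vectors.
    Returns the K estimated motions and the (N, K) soft ownership weights.
    Motion-only baseline sketch; no spatial/form cues or model selection.
    """
    rng = np.random.default_rng(seed)
    N = flow.shape[0]
    # Initialize the K model motions from randomly chosen flow vectors.
    models = flow[rng.choice(N, size=K, replace=False)].copy()
    weights = np.full(K, 1.0 / K)

    for _ in range(iters):
        # E-step: responsibility of each model for each flow vector,
        # under an isotropic Gaussian noise model on the residuals.
        sq_res = ((flow[:, None, :] - models[None, :, :]) ** 2).sum(axis=2)
        log_p = np.log(weights)[None, :] - sq_res / (2.0 * sigma ** 2)
        log_p -= log_p.max(axis=1, keepdims=True)   # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: each model becomes the responsibility-weighted mean
        # of the flow vectors it "owns"; mixing weights follow the mass.
        mass = resp.sum(axis=0)
        models = (resp.T @ flow) / mass[:, None]
        weights = mass / N

    return models, resp

if __name__ == "__main__":
    # Two synthetic motion layers: half the pixels move right, half move up.
    rng = np.random.default_rng(1)
    flow = np.vstack([
        np.array([1.0, 0.0]) + 0.1 * rng.standard_normal((200, 2)),
        np.array([0.0, 1.0]) + 0.1 * rng.standard_normal((200, 2)),
    ])
    models, resp = em_motion_mixture(flow, K=2)
    print("recovered motions:\n", models)

In this simple setting the soft ownership weights resp play the role of the segmentation: each pixel is assigned to the motion model that best explains its flow, which is the baseline the paper extends with spatial coherence and automatic selection of the number of segments.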