Dense Unsupervised Learning for Video Segmentation
Date
2023-08-04
Abstract
We present a novel approach to unsupervised learning for video object segmentation (VOS). Unlike previous work, our formulation allows us to learn dense feature representations directly in a fully convolutional regime. We rely on uniform grid sampling to extract a set of anchors and train our model to disambiguate between them at both inter- and intra-video levels. However, a naive training scheme for such a model results in a degenerate solution. We propose to prevent this with a simple regularisation scheme that accommodates the equivariance of the segmentation task under similarity transformations. Our training objective admits an efficient implementation and exhibits fast training convergence. On established VOS benchmarks, our approach exceeds the segmentation accuracy of previous work despite using significantly less training data and compute power.
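The following is a minimal, illustrative sketch (not the authors' released code) of the ideas stated in the abstract: anchors sampled on a uniform grid from a dense feature map, a pixel-to-anchor disambiguation objective over anchors from the same and other videos, and a regularisation term encouraging equivariance under similarity transformations. It assumes a fully convolutional PyTorch `encoder` producing L2-normalised features; all function names and hyperparameters below are hypothetical.

```python
import torch
import torch.nn.functional as F


def grid_anchors(feats: torch.Tensor, stride: int = 4) -> torch.Tensor:
    """Sample anchor vectors on a uniform grid over a dense feature map.

    feats: (B, C, H, W); H and W are assumed divisible by `stride`.
    Returns anchors of shape (B, N, C) with N = (H // stride) * (W // stride).
    """
    B, C, H, W = feats.shape
    ys = torch.arange(stride // 2, H, stride, device=feats.device)
    xs = torch.arange(stride // 2, W, stride, device=feats.device)
    grid = feats[:, :, ys][:, :, :, xs]            # (B, C, H//stride, W//stride)
    return grid.flatten(2).transpose(1, 2)          # (B, N, C)


def anchor_disambiguation_loss(feats: torch.Tensor, stride: int = 4,
                               temperature: float = 0.1) -> torch.Tensor:
    """Each pixel selects the anchor of its own grid cell via a softmax over all
    anchors in the batch, i.e. against anchors from the same frame (intra-video)
    and from other videos in the batch (inter-video)."""
    B, C, H, W = feats.shape
    anchors = grid_anchors(feats, stride)                        # (B, N, C)
    n_x = W // stride
    ys = torch.arange(H, device=feats.device) // stride          # (H,)
    xs = torch.arange(W, device=feats.device) // stride          # (W,)
    cell = ys[:, None] * n_x + xs[None, :]                       # (H, W) anchor index
    offset = torch.arange(B, device=feats.device) * anchors.size(1)
    targets = (cell[None] + offset[:, None, None]).flatten()     # (B*H*W,)
    pixels = feats.permute(0, 2, 3, 1).reshape(-1, C)            # (B*H*W, C)
    logits = pixels @ anchors.reshape(-1, C).t() / temperature
    return F.cross_entropy(logits, targets)


def equivariance_regulariser(encoder, frames, feats):
    """Regularise towards equivariance under a random similarity transform:
    the features of a transformed frame should match the identically
    transformed features of the original frame."""
    B = frames.size(0)
    angle = (torch.rand(B, device=frames.device) - 0.5) * 0.5    # small rotation (rad)
    scale = 0.8 + 0.4 * torch.rand(B, device=frames.device)      # isotropic scale
    cos, sin = torch.cos(angle) * scale, torch.sin(angle) * scale
    theta = torch.zeros(B, 2, 3, device=frames.device)
    theta[:, 0, 0], theta[:, 0, 1] = cos, -sin
    theta[:, 1, 0], theta[:, 1, 1] = sin, cos
    warped_frames = F.grid_sample(
        frames, F.affine_grid(theta, frames.shape, align_corners=False),
        align_corners=False)
    warped_feats = F.grid_sample(
        feats, F.affine_grid(theta, feats.shape, align_corners=False),
        align_corners=False)
    return F.mse_loss(encoder(warped_frames), warped_feats.detach())
```

In this sketch, minimising the disambiguation loss alone could collapse features within each grid cell; the equivariance term is the kind of simple regularisation the abstract refers to for preventing such degenerate solutions. Consult the paper linked below for the actual formulation.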
Related Resources
Is Described By
https://arxiv.org/abs/2111.06265
License
Except where otherwise noted, this license is described as Apache License 2.0