Adaptive Appearance Model

This page provides the videos from the experimental evaluation, presented in [1], of trackers that perform on-line appearance model adaptation.

Overview

Long-term video tracking is important for many real-world applications. A key component for achieving long-term tracking is the tracker’s capability to update its internal representation of the target (the appearance model) under changing conditions. Given the rapid but fragmented development of this research area, we propose a unified conceptual framework for appearance model adaptation that enables a principled comparison of different approaches. Moreover, we introduce a novel evaluation methodology that enables simultaneous analysis of tracking accuracy and tracking success, without the need to set application-dependent thresholds. Based on the proposed framework and this novel evaluation methodology, we conduct an extensive experimental comparison of trackers that perform appearance model adaptation. Theoretical and experimental analyses allow us to identify the most effective approaches and to highlight design choices that favor resilience to errors during the update process.
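The exact evaluation protocol of [1] is not reproduced on this page, but the threshold-free idea can be illustrated with a common construction: instead of declaring success at one fixed overlap threshold, sweep the threshold over its whole range and summarize the resulting success curve by its area. The sketch below is an assumption for illustration only; the function names and the IoU-based overlap measure are not taken from the paper.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_curve(pred_boxes, gt_boxes, thresholds=np.linspace(0.0, 1.0, 101)):
    """Fraction of frames whose overlap exceeds each threshold.

    The mean of the curve over uniformly spaced thresholds approximates
    the area under it, giving a single score that needs no fixed,
    application-dependent threshold.
    """
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    curve = np.array([(overlaps > t).mean() for t in thresholds])
    return curve, float(curve.mean())
```

A tracker that keeps high overlap on most frames yields a curve that stays near 1 for a long threshold range, and hence a high area; a tracker that drifts collapses the curve early even if it never fully loses the target.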

Results

Trackers included in the evaluation:

  • Boost [2]
  • SemiBoost [3]
  • BeyondSemiBoost [4]
  • Mean-Shift [5]
  • Color-based particle filter [6]
  • FragTrack [7]
  • Adaptive Basin Hopping Monte Carlo [8]
  • IVT [9]
  • MILBoost [10]
  • TLD [11]
  • STRUCK [12]


Rare appearance changes

Dollar

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK


OneStopMoveNoEnter1cor

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK


Continuous appearance changes

Coke11

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK

Person

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK


Partial occlusions

Faceocc2

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK

Box450

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK


Short-term total occlusions

Box

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK

Lemming

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK

ThreePastShop2cor

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK


Initialization errors

Board

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK


Wide angle views

OneStopEnter2front

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK

ISSIA_6

  • Boost
  • SemiBoost
  • BeyondSemiBoost
  • Mean-Shift
  • Color-based PF
  • FragTrack
  • A-BHMC
  • IVT
  • MILBoost
  • TLD
  • STRUCK

References

  1. S. Salti, A. Cavallaro, and L. Di Stefano, “Adaptive Appearance Modeling for Video Tracking: Survey and Evaluation,” IEEE Trans. on Image Processing, 2012.
  2. H. Grabner and H. Bischof, “On-line boosting and vision,” in Proc. of the International Conference on Computer Vision and Pattern Recognition (CVPR) - Volume 1, New York, NY, 2006, pp. 260–267.
  3. H. Grabner, C. Leistner, and H. Bischof, “Semi-supervised on-line boosting for robust tracking,” in Proc. of the European Conference on Computer Vision (ECCV) - Part I, Marseille, France, 2008, pp. 234–247.
  4. S. Stalder, H. Grabner, and L. van Gool, “Beyond semi-supervised tracking: Tracking should be as simple as detection, but not simpler than recognition,” in Proc. of the International Conference on Computer Vision (ICCV) - Workshop on On-line Learning for Computer Vision, Kyoto, Japan, 2009, pp. 1409–1416.
  5. D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564–575, 2003.
  6. P. Pérez, C. Hue, J. Vermaak, and M. Gangnet, “Color-based probabilistic tracking,” in Proc. of the European Conference on Computer Vision (ECCV) - Part I, Copenhagen, Denmark, 2002, pp. 661–675.
  7. A. Adam, E. Rivlin, and I. Shimshoni, “Robust Fragments-based Tracking Using the Integral Histogram,” in Proc. of the International Conference on Computer Vision and Pattern Recognition (CVPR) - Volume 1, New York, NY, 2006, pp. 798–805.
  8. J. Kwon and K. M. Lee, “Tracking of a Non-rigid Object via Patch-based Dynamic Appearance Modeling and Adaptive Basin Hopping Monte Carlo Sampling,” in Proc. of the International Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, 2009, pp. 1208–1215.
  9. D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, “Incremental learning for robust visual tracking,” International Journal of Computer Vision, vol. 77, no. 1-3, pp. 125–141, 2008.
  10. B. Babenko, M.-H. Yang, and S. Belongie, “Robust object tracking with online multiple instance learning,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1619–1632, Aug. 2011.
  11. Z. Kalal, J. Matas, and K. Mikolajczyk, “Online Learning of Robust Object Detectors During Unstable Tracking,” in Proc. of the International Conference on Computer Vision (ICCV) - Workshop on On-line Learning for Computer Vision, Kyoto, Japan, 2009.
  12. S. Hare, A. Saffari, and P. Torr, “STRUCK: Structured Output Tracking with Kernels,” in Proc. of the International Conference on Computer Vision (ICCV), Barcelona, Spain, 2011, pp. 263–270.