Computing Reviews
Robust visual tracking using an effective appearance model based on sparse coding
Zhang S., Yao H., Sun X., Liu S. ACM Transactions on Intelligent Systems and Technology 3(3):1-18, 2012. Type: Article
Date Reviewed: Mar 25 2013

This paper presents a visual tracking method for video based on sparse distributed representations built from general basis functions (GBFs) extracted from image patches. The GBFs are learned as features by applying an algorithm based on independent component analysis (ICA) to a large set of natural image patches. The authors then model the appearance of the tracked target as the probability distribution of these features. The proposed method has three steps: (1) GBF extraction from image patches using ICA; (2) target representation, using a subset of features selected by entropy and the computation of their probability distribution; and (3) target search, minimizing the Matusita distance between the distributions of the target model and each candidate. Because the method uses a sparse representation and local features, it is robust to partial occlusion. As the authors' experiments demonstrate, it is also robust to camouflaged environments, pose changes, and illumination changes, which is adequate for typical video surveillance applications.
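To make step (3) concrete, here is a minimal sketch of candidate selection by Matusita distance between feature-probability histograms. This is an illustration only, not the authors' implementation: the histograms below are synthetic stand-ins, whereas the paper derives them from ICA-learned GBF responses, and the function names are my own.

```python
import numpy as np

def matusita_distance(p, q):
    """Matusita distance between two discrete probability distributions:
    sqrt(sum_i (sqrt(p_i) - sqrt(q_i))^2)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def best_candidate(target_hist, candidate_hists):
    """Return the index of the candidate window whose feature histogram
    minimizes the Matusita distance to the target model."""
    dists = [matusita_distance(target_hist, h) for h in candidate_hists]
    return int(np.argmin(dists))

if __name__ == "__main__":
    target = [0.5, 0.3, 0.2]                   # hypothetical target model
    candidates = [[0.1, 0.1, 0.8],             # poor match
                  [0.45, 0.35, 0.2]]           # close match
    print(best_candidate(target, candidates))
```

For normalized distributions the Matusita distance is a monotone transform of the Bhattacharyya coefficient, so minimizing it maximizes histogram overlap, much as in mean-shift-style trackers.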

The authors compare their approach to the mean shift tracker, the L1 tracker, and the BH tracker, obtaining better results. I would have liked to see them adopt a standard benchmark database. They might also have compared their method with other visual tracking methods, such as those based on SIFT or SURF features, or with the improved methods described by Yang et al. [1], whose research is referenced in the paper. I also wish the authors had discussed and evaluated the robustness and invariance of the proposed approach under significant background changes and general scene translations, rotations, and scaling.

Reviewer: Fernando Osorio. Review #: CR141062 (1307-0649)
1) Yang, M.; Yuan, J.; Wu, Y. Spatial selection for attentional visual tracking. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2007, 1-7.
Video Analysis (I.2.10)
Motion (I.4.8)
Region Growing, Partitioning (I.4.6)
Scene Analysis (I.4.8)
Segmentation (I.4.6)
Other reviews under "Video Analysis": Date
Background subtraction based on logarithmic intensities
Wu Q., Jeng B. Pattern Recognition Letters 23(13): 1529-1536, 2002. Type: Article
Aug 6 2003
Video segmentation based on 2D image analysis: a coding theoretic approach
Guimarães S., Couprie M., Araújo A., Leite N. Pattern Recognition Letters 24(7): 947-957, 2003. Type: Article
Jun 20 2003
Multimodal video characterization and summarization (Kluwer International Series in Video Computing)
Smith M., Kanade T., Kluwer Academic Publishers, Norwell, MA, 2004. Type: Book (9781402074264)
Apr 22 2005

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®