Exploiting Scene Context for On-line Object Tracking in Unconstrained Environments

Author: Salma Moujtahid
Release: 2016
OCLC: 1041708476

Book Synopsis: Exploiting Scene Context for On-line Object Tracking in Unconstrained Environments, by Salma Moujtahid

This thesis was written by Salma Moujtahid and released in 2016. Book excerpt: With the increasing need for automated video analysis, visual object tracking has become an important task in computer vision. Object tracking is used in a wide range of applications such as surveillance, human-computer interaction, medical imaging, and vehicle navigation. A tracking algorithm operating in unconstrained environments faces multiple challenges: potential changes in object shape and background, lighting, camera motion, and other adverse acquisition conditions. In this setting, classic background-subtraction methods are inadequate, and more discriminative object-detection methods are needed. Moreover, in generic tracking algorithms the nature of the object is not known a priori, so appearance models learned off-line for specific object types, such as faces or pedestrians, cannot be used. Furthermore, the recent evolution of powerful machine learning techniques has enabled tracking methods that learn the object's appearance in an online manner and adapt to varying constraints in real time, leading to robust algorithms that can, to some extent, operate in non-stationary environments. In this thesis, we start from the observation that different tracking algorithms have different strengths and weaknesses depending on the context. To overcome the varying challenges, we show that combining multiple modalities and tracking algorithms can considerably improve overall tracking performance in unconstrained environments. More concretely, we first introduce a new tracker-selection framework using a spatial and temporal coherence criterion.
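The synopsis does not spell out the coherence criterion itself. Purely as an illustration of the idea, a selection score might combine each tracker's own appearance-model confidence (a spatial term) with consistency against the previous frame's output (a temporal term). Everything below, including the weighting formula, the `alpha` parameter, and the function names, is an assumption for illustration, not the thesis's actual criterion:

```python
# Hypothetical sketch: selecting among parallel trackers by a combined
# spatio-temporal coherence score. NOT the thesis's actual criterion.
from dataclasses import dataclass


@dataclass
class TrackerOutput:
    box: tuple          # predicted bounding box as (x, y, w, h)
    confidence: float   # the tracker's own appearance-model confidence


def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0


def select_tracker(outputs, previous_box, alpha=0.5):
    """Pick the index of the tracker whose output is both confident
    (spatial term) and consistent with the previous frame's box
    (temporal term). `alpha` trades off the two terms (assumed)."""
    def coherence(out):
        return alpha * out.confidence + (1 - alpha) * iou(out.box, previous_box)
    return max(range(len(outputs)), key=lambda i: coherence(outputs[i]))
```

For example, a tracker reporting slightly lower confidence but a box overlapping the previous frame's estimate would beat a more confident tracker whose box has jumped elsewhere.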
In this algorithm, multiple independent trackers run in parallel, each using low-level features based on complementary visual aspects such as colour, texture, and shape. By recurrently selecting the most suitable tracker, the overall system can switch rapidly between tracking algorithms with specific appearance models as the video changes. In the second contribution, scene context is introduced into the tracker selection. We designed effective visual features, extracted from the scene context, to characterise the different image conditions and variations. At each point in time, a classifier trained on these features predicts the tracker that will perform best under the given scene conditions. We further improved this context-based framework and proposed an extended version in which the individual trackers are changed and the classifier training is optimised. Finally, we began exploring a promising perspective: using a Convolutional Neural Network to automatically learn to extract these scene features directly from the input image and predict the most suitable tracker.
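The synopsis does not name the scene-context features or the classifier used. As a hedged toy sketch of the "train a classifier on scene features to predict the best tracker" idea, the following uses a minimal nearest-centroid classifier over hypothetical feature vectors (e.g. global brightness or motion measures); both the feature set and the classifier choice are illustrative assumptions, not the thesis's method:

```python
# Hypothetical sketch: predict the best-performing tracker from
# scene-context feature vectors using a nearest-centroid classifier.
# Feature design and classifier choice are assumptions for illustration.
from collections import defaultdict


class NearestCentroidSelector:
    """Minimal nearest-centroid classifier: each class (tracker id) is
    represented by the mean of the scene-feature vectors on which that
    tracker performed best."""

    def fit(self, features, best_tracker_ids):
        buckets = defaultdict(list)
        for f, y in zip(features, best_tracker_ids):
            buckets[y].append(f)
        # Per-class centroid: component-wise mean of that class's vectors.
        self.centroids = {
            y: [sum(col) / len(fs) for col in zip(*fs)]
            for y, fs in buckets.items()
        }
        return self

    def predict(self, feature):
        """Return the tracker id whose centroid is closest (squared
        Euclidean distance) to the given scene-feature vector."""
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(feature, c))
        return min(self.centroids, key=lambda y: sq_dist(self.centroids[y]))
```

In use, each training sample would pair a scene-feature vector with the id of the tracker that performed best on that frame; at run time the selector predicts a tracker id for the current frame's features.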
