Robust and Efficient Multi-target Video Tracking Based on Multi-feature Fusion and State Estimation Fusion

Title: Robust and Efficient Multi-target Video Tracking Based on Multi-feature Fusion and State Estimation Fusion
Author: Howard Wang
Publisher:
Pages: 191
Release: 2017
Genre: Automatic tracking
ISBN:


Video tracking occupies an extremely important position in computer vision and has been widely applied in both military and civil fields. However, it requires a large amount of computation owing to complex image-processing and computer-vision algorithms, and it must cope with a variety of complex scenarios that pose great challenges to the robustness of tracking algorithms. In this thesis, an efficient and robust multi-target video detection and tracking framework is presented, integrating automatic video target detection, multi-feature-fusion-based video target modelling, multi-target data association, video target management, state estimation fusion, and distributed multi-camera tracking. Firstly, an automatic, robust, and efficient target detection approach is proposed. The Canny edge detector and a simplified multi-scale wavelet decomposition are exploited to simultaneously extract target contours. Efficient background modelling based on improved Gaussian mixture models (IGMMs) is also investigated to implement background subtraction (BGS) and segment the foreground. Compared with the traditional GMM, IGMMs improve the initialization process and optimize the background-pixel matching strategy by using a mesh-updating technique. In addition, the three-consecutive-frame difference (TCFD) is integrated with the proposed IGMMs-based BGS to quickly locate video targets. Moreover, fast morphological operations are performed on monochrome foreground images to segment targets of interest and extract their contours. After that, multi-feature-fusion-based target modelling is introduced to describe video targets robustly.
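The three-consecutive-frame difference (TCFD) step mentioned above can be sketched as follows. This is a minimal illustration assuming grayscale frames as NumPy arrays; the thesis's actual pipeline additionally fuses an IGMMs background model and morphological clean-up, which are omitted here.

```python
import numpy as np

def three_frame_difference(f_prev, f_curr, f_next, thresh=25):
    """Locate moving pixels from three consecutive grayscale frames.

    A pixel is marked as foreground only if it differs from BOTH the
    previous and the next frame, which suppresses the ghosting that a
    simple two-frame difference leaves behind a moving object.
    """
    d1 = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16)) > thresh
    d2 = np.abs(f_next.astype(np.int16) - f_curr.astype(np.int16)) > thresh
    return (d1 & d2).astype(np.uint8)  # binary motion mask
```

In a framework like the one described, this fast motion mask would be intersected with the background-subtraction mask to localize targets quickly before contour extraction.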
The spatial colour distribution, the rotation-and-scale-invariant uniform local binary pattern (RSIULBP) texture, and edge orientation gradients are computed and fused to build a fused-feature matching matrix, which is integrated into data association to realize reliable and precise multi-target tracking. In addition, multi-feature fusion based on low-dimensional regional covariance matrices is exploited to improve the matching degree of targets in single-target tracking. Parallel computing based on multi-threaded synchronization is employed to boost the efficiency of feature extraction and fusion. An accurate and efficient multi-target data association method integrating an improved probabilistic data association (IPDA) and a simplified joint probabilistic data association (SJPDA) is designed in this study. IPDA combines the augmented posterior probability matrix with the fused-feature matching matrix to perform multi-target association. SJPDA ensures the efficiency of data association and yields better accuracy in the presence of low PSNR and sparse targets by sifting out high-probability events. In order to record and update target trajectories and increase the accuracy of multi-target tracking, a video target management scheme is presented. The states throughout the whole lifecycle of a target are defined and analysed. Meanwhile, a prediction-interpolation-based data recovery approach is discussed to restore missed measurements. Afterwards, a flexible and extensible data structure is designed to encapsulate target states at each time step. Variable-length sequence containers are exploited to store existing targets, newly appearing targets, and targets that have disappeared. The switching criterion between target states is discussed. To quickly and robustly estimate the motion states of rigid targets, mixed Kalman/H∞ filtering based on state covariance fusion and state estimate fusion is proposed.
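The idea of a fused-feature matching matrix can be illustrated schematically: per-cue similarity matrices (colour, texture, edge gradients) between tracks and detections are weighted and summed, and the result drives the association. The weights and the greedy assignment below are illustrative assumptions only, not the thesis's IPDA/SJPDA formulation.

```python
import numpy as np

def fuse_similarity(colour_sim, texture_sim, edge_sim,
                    weights=(0.5, 0.3, 0.2)):
    """Weighted sum of per-cue similarity matrices (tracks x detections)."""
    w_c, w_t, w_e = weights
    return w_c * colour_sim + w_t * texture_sim + w_e * edge_sim

def greedy_associate(match_matrix, min_sim=0.4):
    """Assign each track to its best unclaimed detection above min_sim."""
    pairs, taken = [], set()
    # Visit the most confident tracks first (highest best-match score).
    for trk in np.argsort(-match_matrix.max(axis=1)):
        for det in np.argsort(-match_matrix[trk]):
            if det not in taken and match_matrix[trk, det] >= min_sim:
                pairs.append((int(trk), int(det)))
                taken.add(det)
                break
    return pairs
```

A probabilistic data association scheme would replace the greedy step with posterior association probabilities, but the role of the fused matrix as the association score is the same.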
The H∞ filter makes no statistical assumptions about the process and measurement noise, yet its recursive equations are similar to those of the Kalman filter; it is therefore more robust against non-Gaussian noise. The mixed Kalman/H∞ filter can guarantee both the efficiency and the robustness of state estimation under uncertain noise. To predict the states of highly manoeuvring targets, mixed extended Kalman/particle filtering is introduced. The extended Kalman filter linearizes the system dynamics using a Taylor series expansion, so it can handle mildly nonlinear state estimation. An improved sequential importance resampling particle filter is discussed to estimate target states in the case of strong nonlinearity and a dynamic background. The mixed extended Kalman/particle filtering is performed by feeding the state output of the extended Kalman filter back to the particle filter to initialize the deployment of particles. Compared with single-camera video tracking, multi-camera tracking retrieves more information about the targets of interest from different perspectives and can better solve the problem of target occlusion. A multi-camera cooperative tracking strategy is investigated, and a relay tracking scheme based on an improved Camshift is proposed. To further extend the scope of tracking, a distributed multi-camera video tracking and surveillance (DMVTS) system based on hierarchical centre management modules is developed.
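State-estimate fusion of the kind used to mix the Kalman and H∞ outputs can be sketched with the standard covariance-weighted combination of two independent estimates. The scalar form below is an illustration under that independence assumption, not the thesis's exact scheme.

```python
def fuse_estimates(x1, P1, x2, P2):
    """Covariance-weighted fusion of two independent scalar estimates.

    The fused estimate leans toward whichever filter currently reports
    the smaller covariance, and the fused covariance never exceeds
    either input covariance.
    """
    K = P1 / (P1 + P2)          # weight given to the second estimate
    x_fused = x1 + K * (x2 - x1)
    P_fused = P1 * P2 / (P1 + P2)
    return x_fused, P_fused
```

For example, fusing two equally trusted estimates returns their midpoint with half the covariance, while a high-covariance estimate barely shifts a low-covariance one.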

Feature-Based Probabilistic Data Association for Video-Based Multi-Object Tracking

Title: Feature-Based Probabilistic Data Association for Video-Based Multi-Object Tracking
Author: Grinberg, Michael
Publisher: KIT Scientific Publishing
Pages: 296
Release: 2018-08-10
Genre: Electronic computers. Computer science
ISBN: 3731507811


This work proposes a feature-based probabilistic data association and tracking approach (FBPDATA) for multi-object tracking. FBPDATA is based on re-identification and tracking of individual video image points (feature points) and aims at solving the problems of partial, split (fragmented), bloated, or missed detections, which arise from sensory or algorithmic restrictions, the sensors' limited field of view, and occlusion situations.

Robust Video Object Tracking in Distributed Camera Networks

Title: Robust Video Object Tracking in Distributed Camera Networks
Author: Younggun Lee
Publisher:
Pages: 84
Release: 2017
Genre:
ISBN:


We propose a robust video object tracking system for distributed camera networks. The main problem in wide-area surveillance is that the people being tracked may exhibit dramatic appearance changes across cameras on account of varied illumination, viewing angles, poses, and camera responses. We construct a robust human tracking system across multiple cameras based on fully unsupervised online learning, so that the camera link models among them can be learned online and the tracked targets in every single camera can be accurately re-identified using both appearance cues and context information. We present the three main parts of our research: an ensemble of invariant appearance descriptors, inter-camera tracking based on fully unsupervised online learning, and multi-camera human tracking across non-overlapping cameras. For effective appearance descriptors, we present an appearance-based re-identification framework that uses an ensemble of invariant features to achieve robustness against partial occlusion, camera colour response variation, and pose and viewpoint changes. The proposed method not only handles the problems resulting from changing human pose and viewpoint, with some tolerance of illumination changes, but also avoids laborious calibration effort and its restrictions. We take advantage of these invariant features in tracking. We present an inter-camera tracking method based on online learning, which systematically builds the camera link model without any human intervention. The aim of inter-camera tracking is to assign unique IDs as people move across different cameras. Facilitated by the proposed two-phase feature extractor, which consists of two-way Gaussian mixture model fitting and couple features in phase I, followed by holistic colour and regional colour/texture features in phase II, the proposed method can effectively and robustly identify the same person across cameras.
To build the complete tracking system, we propose a robust multiple-camera tracking system based on a two-step framework: the single-camera tracking algorithm is first performed in each camera to create trajectories of multiple targets, and the inter-camera tracking algorithm is then carried out to associate tracks belonging to the same identity. Since inter-camera tracking derives appearance and motion features from single-camera tracking results, i.e., the detected/tracked objects and segmentation masks, inter-camera tracking performance depends heavily on single-camera tracking performance. For single-camera tracking, we present a multi-object tracker that adaptively refines the segmentation results based on multi-kernel feedback from preliminary tracking to handle object merging and shadowing. In addition, detection within local object regions is incorporated to address initial occlusion when people appear in groups.
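The two-step framework can be illustrated with a toy inter-camera association step: each single-camera track is summarized by an appearance feature vector, and tracks from different cameras are linked when their feature distance falls below a threshold. The feature vectors and the threshold below are hypothetical placeholders for the two-phase descriptors and learned camera link models described in the text.

```python
import math

def feature_distance(f1, f2):
    """Euclidean distance between two appearance feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def link_tracks(cam_a_tracks, cam_b_tracks, max_dist=0.5):
    """Greedily link each camera-A track to its nearest camera-B track.

    cam_*_tracks: dict mapping track id -> appearance feature vector.
    Returns a dict of cross-camera id pairs assumed to share one identity.
    """
    links, used = {}, set()
    for ida, fa in cam_a_tracks.items():
        best, best_d = None, max_dist
        for idb, fb in cam_b_tracks.items():
            d = feature_distance(fa, fb)
            if idb not in used and d < best_d:
                best, best_d = idb, d
        if best is not None:
            links[ida] = best
            used.add(best)
    return links
```

In the proposed system the matching score would come from the learned camera link model rather than a fixed Euclidean threshold, but the association structure is the same.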

Multi-Sensor Information Fusion

Title: Multi-Sensor Information Fusion
Author: Xue-Bo Jin
Publisher: MDPI
Pages: 602
Release: 2020-03-23
Genre: Technology & Engineering
ISBN: 3039283022


This book collects papers from the Sensors section "Multisensor Information Fusion" published between 2018 and 2019. It focuses on the latest results in multi-sensor fusion and represents current research trends, covering traditional information fusion techniques, estimation and filtering, and recent work on artificial intelligence, including deep learning.

Visual Object Tracking using Deep Learning

Title: Visual Object Tracking using Deep Learning
Author: Ashish Kumar
Publisher: CRC Press
Pages: 216
Release: 2023-11-20
Genre: Technology & Engineering
ISBN: 1000990982


This book covers both conventional and advanced methods. Among conventional methods, visual tracking techniques such as stochastic, deterministic, generative, and discriminative approaches are discussed, and these techniques are further explored in multi-stage and collaborative frameworks. Among advanced methods, various categories of deep learning-based trackers and correlation filter-based trackers are analysed. The book also:

Discusses potential performance metrics used for comparing the efficiency and effectiveness of various visual tracking methods

Elaborates on the salient features of deep learning trackers alongside traditional trackers, wherein handcrafted features are fused to reduce computational complexity

Illustrates various categories of correlation filter-based trackers suited to superior and efficient performance under tedious tracking scenarios

Explores future research directions for visual tracking by analysing real-time applications

The book comprehensively discusses various deep learning-based tracking architectures along with conventional tracking methods, including an in-depth analysis of feature extraction techniques, evaluation metrics, and the benchmarks available for evaluating tracking frameworks. The text is primarily written for senior undergraduates, graduate students, and academic researchers in electrical engineering, electronics and communication engineering, computer engineering, and information technology.

Multitarget-multisensor Tracking

Title: Multitarget-multisensor Tracking
Author: Yaakov Bar-Shalom
Publisher:
Pages: 615
Release: 1995
Genre: Radar
ISBN: 9780964831209


Multi-sensor Multi-target Data Fusion, Tracking and Identification Techniques for Guidance and Control Applications

Title: Multi-sensor Multi-target Data Fusion, Tracking and Identification Techniques for Guidance and Control Applications
Author:
Publisher:
Pages: 312
Release: 1996
Genre: Guided missiles
ISBN:


Summary in French.