Robust Video Object Tracking in Distributed Camera Networks

Title: Robust Video Object Tracking in Distributed Camera Networks
Author: Younggun Lee
Pages: 84
Release: 2017

We propose a robust video object tracking system for distributed camera networks. The main difficulty in wide-area surveillance is that people being tracked may exhibit dramatic appearance changes across cameras, owing to variations in illumination, viewing angle, pose, and camera response. We aim to construct a robust human tracking system across multiple cameras based on fully unsupervised online learning, so that the camera link models among cameras can be learned online and the targets tracked in each single camera can be accurately re-identified using both appearance cues and context information. Our research comprises three main parts: an ensemble of invariant appearance descriptors, inter-camera tracking based on fully unsupervised online learning, and multiple-camera human tracking across non-overlapping cameras. For effective appearance description, we present an appearance-based re-identification framework that uses an ensemble of invariant features to achieve robustness against partial occlusion, camera color-response variation, and pose and viewpoint changes. The proposed method not only addresses the problems resulting from changing human pose and viewpoint, with some tolerance to illumination changes, but also avoids laborious calibration effort and its restrictions. We take advantage of these invariant features in tracking. We present an inter-camera tracking method based on online learning that systematically builds the camera link model without any human intervention. The aim of inter-camera tracking is to assign unique IDs to people as they move across different cameras. Facilitated by the proposed two-phase feature extractor, which combines two-way Gaussian mixture model fitting and coupled features in phase I with holistic color and regional color/texture features in phase II, the proposed method can effectively and robustly identify the same person across cameras.
To build the complete tracking system, we propose a robust multiple-camera tracking system based on a two-step framework: a single-camera tracking algorithm is first run in each camera to create trajectories of multiple targets, and an inter-camera tracking algorithm then associates the tracks belonging to the same identity. Since inter-camera tracking derives appearance and motion features from single-camera tracking results, i.e., the detected/tracked objects and their segmentation masks, inter-camera tracking performance depends heavily on single-camera tracking performance. For single-camera tracking, we present a multi-object tracker that adaptively refines segmentation results using multi-kernel feedback from preliminary tracking, to handle object merging and shadowing. In addition, detection within local object regions is incorporated to address initial occlusion when people appear in groups.
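The inter-camera association step described above can be illustrated with a minimal sketch: given appearance descriptors extracted from single-camera tracks, match tracks across two cameras by nearest appearance distance. The descriptors, distance threshold, and greedy matching rule here are illustrative assumptions, not the dissertation's actual two-phase feature extractor.

```python
import numpy as np

def associate_tracks(feats_a, feats_b, max_dist=0.5):
    """Greedily match tracks from camera A to camera B by Euclidean
    distance between appearance descriptors; tracks with no match
    within max_dist keep their own identity."""
    dists = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    matches, used = {}, set()
    # visit candidate pairs from smallest distance upward
    for idx in np.argsort(dists, axis=None):
        i, j = np.unravel_index(idx, dists.shape)
        if int(i) in matches or int(j) in used or dists[i, j] > max_dist:
            continue
        matches[int(i)] = int(j)
        used.add(int(j))
    return matches

# toy example: two tracks in each camera, descriptors nearly swapped in order
a = np.array([[0.1, 0.9], [0.8, 0.2]])
b = np.array([[0.82, 0.18], [0.12, 0.88]])
print(associate_tracks(a, b))  # {0: 1, 1: 0}
```

In practice the descriptor would be the learned invariant-feature ensemble and the threshold would come from the online-learned camera link model; the greedy pass stands in for a proper assignment solver.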

Distributed Video Sensor Networks

Title: Distributed Video Sensor Networks
Author: Bir Bhanu
Publisher: Springer Science & Business Media
Pages: 476
Release: 2011-01-04
Genre: Computers
ISBN: 0857291270

Large-scale video networks are of increasing importance in a wide range of applications. However, the development of automated techniques for aggregating and interpreting information from multiple video streams in real-life scenarios is a challenging area of research. Collecting the work of leading researchers from a broad range of disciplines, this timely text/reference offers an in-depth survey of the state of the art in distributed camera networks. The book addresses a broad spectrum of critical issues in this highly interdisciplinary field: current challenges and future directions; video processing and video understanding; simulation, graphics, cognition and video networks; wireless video sensor networks, communications and control; embedded cameras and real-time video analysis; applications of distributed video networks; and educational opportunities and curriculum development. Topics and features:
- presents an overview of research in motion analysis, invariants, multiple cameras for detection, object tracking and recognition, and activities in video networks;
- provides real-world applications of distributed video networks, including force protection, wide-area activities, port security, and recognition in night-time environments;
- describes the challenges in graphics and simulation, covering virtual vision, network security, human activities, cognitive architecture, and displays;
- examines issues of multimedia networks, registration, control of cameras (in simulations and real networks), localization, and bounds on tracking;
- discusses system aspects of video networks, with chapters on testbed environments, data collection on activities, new integrated airborne sensors, face recognition, and building sentient spaces;
- investigates educational opportunities and curriculum development from the perspectives of computer science and electrical engineering.
This unique text will be of great interest to researchers and graduate students of computer vision and pattern recognition, computer graphics and simulation, image processing and embedded systems, and communications, networks and controls. The large number of example applications will also appeal to application engineers.

Camera Networks

Title: Camera Networks
Author: Amit Roy-Chowdhury
Publisher: Morgan & Claypool Publishers
Pages: 135
Release: 2012-01-01
Genre: Computers
ISBN: 1608456757

As networks of video cameras are installed in many applications, such as security and surveillance, environmental monitoring, disaster response, and assisted living facilities, image understanding in camera networks is becoming an important area of research and technology development. Many challenges need to be addressed in the process, including:
- traditional computer vision challenges in tracking and recognition: robustness to pose, illumination, occlusion, and clutter, and recognition of objects and activities;
- aggregating local information for wide-area scene understanding, such as obtaining stable, long-term tracks of objects;
- positioning of the cameras and dynamic control of pan-tilt-zoom (PTZ) cameras for optimal sensing;
- distributed processing and scene analysis algorithms;
- resource constraints imposed by different applications, such as security and surveillance, environmental monitoring, disaster response, and assisted living.
In this book, we focus on the basic research problems in camera networks, review the current state of the art, and present a detailed description of some recently developed methodologies. The major underlying theme throughout is a network-centric view, whereby overall decisions are made at the network level. This is sometimes achieved by accumulating all the data at a central server, and at other times by exchanging decisions made by individual cameras based on their locally sensed data. Chapter One starts with an overview of the problems in camera networks and the major research directions; some currently available experimental testbeds are also discussed. One of the fundamental tasks in the analysis of dynamic scenes is to track objects.
Since camera networks cover a large area, systems need to be able to track over wide areas in which the cameras' fields of view may be both overlapping and non-overlapping; this is addressed in Chapter Two. Distributed processing is another challenge in camera networks, and recent methods have shown how to do tracking, pose estimation, and calibration in a distributed environment; the consensus algorithms that enable these tasks are described in Chapter Three. Chapter Four summarizes approaches to object and activity recognition in both distributed and centralized camera network environments. These methods focus primarily on the analysis side, given that images are being obtained by the cameras. Efficient utilization of such networks often calls for active sensing, whereby the acquisition and analysis phases are closely linked; we discuss this issue in detail in Chapter Five and show how collaborative and opportunistic sensing in a camera network can be achieved. Finally, Chapter Six concludes the book by highlighting major directions for future research. Table of Contents: An Introduction to Camera Networks / Wide-Area Tracking / Distributed Processing in Camera Networks / Object and Activity Recognition / Active Sensing / Future Research Directions
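As a rough illustration of the consensus primitive surveyed in Chapter Three, the sketch below runs average consensus on a small camera graph: each node repeatedly nudges its local estimate toward its neighbors' values until all nodes agree on the network-wide mean, without any central server. The topology, step size, and iteration count are illustrative assumptions, not the book's algorithms.

```python
import numpy as np

def average_consensus(values, neighbors, step=0.3, iters=100):
    """Distributed averaging on a graph given as adjacency lists.
    Each iteration, node i moves toward the average of its neighbors;
    for a connected graph and a small enough step, all nodes converge
    to the mean of the initial values."""
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            # move in proportion to the total disagreement with neighbors
            x_new[i] += step * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# four cameras in a ring, each starting with a different local estimate
vals = [1.0, 5.0, 3.0, 7.0]
ring = [[1, 3], [0, 2], [1, 3], [2, 0]]
print(average_consensus(vals, ring))  # all entries converge to 4.0
```

The same iteration, run on vectors instead of scalars, underlies distributed tracking and calibration: each camera only ever exchanges state with its graph neighbors.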

Distributed Multi-object Tracking with Multi-camera Systems Composed of Overlapping and Non-overlapping Cameras

Title: Distributed Multi-object Tracking with Multi-camera Systems Composed of Overlapping and Non-overlapping Cameras
Author: Youlu Wang
Pages: 196
Release: 2013
Genre: Cameras
ISBN: 9781303033025

Multiple cameras have been used to improve the coverage and accuracy of visual surveillance systems; an estimated 30 million surveillance cameras are now deployed in the United States. The large amount of video data generated by these cameras necessitates automatic activity analysis, and automatic object detection and tracking are essential steps before any activity/event analysis. Most work on automatic tracking of objects across multiple camera views has considered systems that rely on a back-end server to process video inputs from multiple cameras. In this dissertation, we propose distributed camera systems that communicate peer to peer. Each camera in the proposed systems performs object detection and tracking individually and exchanges only a small amount of data for consistent labeling. With lightweight and robust algorithms running in each camera, the systems are capable of tracking multiple objects in real time. The cameras in the system may have overlapping or non-overlapping views. With partially overlapping views, object labels can be handed off between cameras based on geometric relations. Most camera systems with overlapping views attach cameras to PCs that communicate via Ethernet, which limits flexibility and scalability. With advances in VLSI technology, smart cameras have been introduced: a smart camera not only captures images but also includes a processor, memory, and a communication interface, making it a stand-alone unit. We first present a wireless embedded smart camera system for cooperative object tracking and detection of composite events. Each camera is a CITRIC mote consisting of a camera board and a wireless mote, and all processing is performed on the camera boards. The power consumption of the proposed system is analyzed based on measurements of operating currents for different scenarios.
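The geometry-based label handoff mentioned above can be sketched as follows, under the assumption that a ground-plane homography between the two overlapping views is known. The homography, the distance gate, and the nearest-track rule are illustrative assumptions, not the system's exact procedure.

```python
import numpy as np

def project(H, pt):
    """Apply a 3x3 homography to a 2-D point (homogeneous coordinates)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

def handoff_label(H_ab, pt_a, tracks_b, gate=5.0):
    """Project a camera-A target location into camera B's view and hand
    the label to the nearest camera-B track, or return None if nothing
    falls within the distance gate."""
    p = project(H_ab, pt_a)
    dists = [np.linalg.norm(p - np.asarray(t)) for t in tracks_b]
    j = int(np.argmin(dists))
    return j if dists[j] <= gate else None

# toy homography: camera B's coordinates are A's shifted by (10, 0)
H_ab = np.array([[1.0, 0.0, 10.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
print(handoff_label(H_ab, (4.0, 7.0), [(2.0, 2.0), (14.2, 7.1)]))  # 1
```

A real system would estimate H_ab during calibration (or learn it online) and combine this geometric cue with appearance before committing to a handoff.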
In wide-area tracking applications, on the other hand, it is not always realistic to assume that all cameras in the system have overlapping fields of view. Tracking across non-overlapping views presents more challenges due to the lack of spatial continuity. To address this problem, we present another distributed camera system based on a probabilistic Petri net framework. We combine appearance features of objects with travel-time evidence for target matching and consistent labeling across disjoint camera views. Multiple features are combined with adaptive weights, which are assigned based on the reliability of each feature and updated online. We employ a probabilistic Petri net to account for the uncertainties of the vision algorithms and to incorporate the available domain knowledge. Synchronization is another important problem for multi-camera systems, because precise temporal correspondence between the video data captured by different cameras is essential. We present a computationally efficient and robust method for temporally calibrating video sequences from unsynchronized cameras. As opposed to expensive hardware-based synchronization, our algorithm is based solely on video processing: it matches and aligns object trajectories using the Longest Consecutive Common Subsequence, and thus recovers the frame offset between video sequences. With an increasing number of cameras in the system, cost and flexibility are important factors to consider, and the cost of each camera node increases with the resolution of its image sensor. A possible way of employing low-cost, low-resolution sensors to achieve higher-resolution images is presented: four embedded cameras with low-resolution customized sensors are tiled in different arrangements. With the customized CMOS imager, we perform edge and motion detection on the focal plane, then stitch the four edge images together to obtain a higher-resolution edge map.
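The trajectory-alignment idea can be illustrated with a simplified sketch: slide one trajectory over the other and keep the frame shift that yields the longest consecutive run of agreeing positions. The tolerance and search range below are illustrative assumptions; the dissertation's Longest Consecutive Common Subsequence formulation is more general.

```python
import numpy as np

def longest_common_run(traj_a, traj_b, offset, eps=1.0):
    """Length of the longest consecutive stretch where traj_a[t] and
    traj_b[t + offset] lie within eps of each other."""
    best = run = 0
    for t in range(len(traj_a)):
        u = t + offset
        if 0 <= u < len(traj_b) and np.linalg.norm(traj_a[t] - traj_b[u]) <= eps:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

def recover_offset(traj_a, traj_b, max_shift=10):
    """Pick the frame shift that maximizes consecutive agreement."""
    shifts = range(-max_shift, max_shift + 1)
    return max(shifts, key=lambda s: longest_common_run(traj_a, traj_b, s))

# camera B starts recording 3 frames later than camera A
a = np.array([[t, 2.0 * t] for t in range(20)], dtype=float)
b = a[3:].copy()
print(recover_offset(a, b))  # -3
```

With the offset recovered, frames from the two sequences can be paired without any hardware synchronization signal.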

Camera Networks

Title: Camera Networks
Author: Matthew Turk
Publisher: Springer Nature
Pages: 119
Release: 2022-05-31
Genre: Computers
ISBN: 3031018117

As networks of video cameras are installed in many applications, such as security and surveillance, environmental monitoring, disaster response, and assisted living facilities, image understanding in camera networks is becoming an important area of research and technology development. Many challenges need to be addressed in the process, including:
- traditional computer vision challenges in tracking and recognition: robustness to pose, illumination, occlusion, and clutter, and recognition of objects and activities;
- aggregating local information for wide-area scene understanding, such as obtaining stable, long-term tracks of objects;
- positioning of the cameras and dynamic control of pan-tilt-zoom (PTZ) cameras for optimal sensing;
- distributed processing and scene analysis algorithms;
- resource constraints imposed by different applications, such as security and surveillance, environmental monitoring, disaster response, and assisted living.
In this book, we focus on the basic research problems in camera networks, review the current state of the art, and present a detailed description of some recently developed methodologies. The major underlying theme throughout is a network-centric view, whereby overall decisions are made at the network level. This is sometimes achieved by accumulating all the data at a central server, and at other times by exchanging decisions made by individual cameras based on their locally sensed data. Chapter One starts with an overview of the problems in camera networks and the major research directions; some currently available experimental testbeds are also discussed. One of the fundamental tasks in the analysis of dynamic scenes is to track objects.
Since camera networks cover a large area, systems need to be able to track over wide areas in which the cameras' fields of view may be both overlapping and non-overlapping; this is addressed in Chapter Two. Distributed processing is another challenge in camera networks, and recent methods have shown how to do tracking, pose estimation, and calibration in a distributed environment; the consensus algorithms that enable these tasks are described in Chapter Three. Chapter Four summarizes approaches to object and activity recognition in both distributed and centralized camera network environments. These methods focus primarily on the analysis side, given that images are being obtained by the cameras. Efficient utilization of such networks often calls for active sensing, whereby the acquisition and analysis phases are closely linked; we discuss this issue in detail in Chapter Five and show how collaborative and opportunistic sensing in a camera network can be achieved. Finally, Chapter Six concludes the book by highlighting major directions for future research. Table of Contents: An Introduction to Camera Networks / Wide-Area Tracking / Distributed Processing in Camera Networks / Object and Activity Recognition / Active Sensing / Future Research Directions

Robust and Efficient Multi-target Video Tracking Based on Multi-feature Fusion and State Estimation Fusion

Title: Robust and Efficient Multi-target Video Tracking Based on Multi-feature Fusion and State Estimation Fusion
Author: Howard Wang
Pages: 191
Release: 2017
Genre: Automatic tracking

Video tracking occupies an important position in computer vision and has been widely applied in military and civilian fields. However, video tracking requires a large amount of computation due to complex image processing and computer vision algorithms, and it must cope with complex scenarios that pose great challenges to the robustness of tracking algorithms. This thesis presents an efficient and robust multi-target video detection and tracking framework that integrates automatic video target detection, multi-feature-fusion-based target modelling, multi-target data association, video target management, state estimation fusion, and distributed multi-camera tracking. First, an automatic, robust, and efficient target detection approach is proposed. The Canny edge detector and a simplified multi-scale wavelet decomposition are exploited to extract target contours. Efficient background modelling based on improved Gaussian mixture models (IGMMs) is investigated to implement background subtraction (BGS) and segment the foreground. Compared with the traditional GMM, the IGMM improves the initialization process and optimizes the background-pixel matching strategy using a mesh-updating technique. In addition, three-consecutive-frame difference (TCFD) is integrated with the proposed IGMM-based BGS to quickly locate video targets. Fast morphological operations are then performed on monochrome foreground images to segment targets of interest and extract their contours. After that, multi-feature-fusion-based target modelling is introduced to robustly describe video targets.
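The three-consecutive-frame difference idea can be sketched in a few lines: a pixel is marked as moving only if it differs from both the previous and the next frame, which suppresses the "ghost" regions left by simple two-frame differencing. The threshold and toy frames below are illustrative assumptions, not the thesis's tuned parameters.

```python
import numpy as np

def tcfd_mask(prev_f, cur_f, next_f, thresh=15):
    """Binary motion mask from three consecutive grayscale frames:
    a pixel is moving iff it changed versus BOTH neighbors in time."""
    d1 = np.abs(cur_f.astype(int) - prev_f.astype(int)) > thresh
    d2 = np.abs(next_f.astype(int) - cur_f.astype(int)) > thresh
    return np.logical_and(d1, d2)

# toy 1-D "frames": a bright object (255) moving right over a dark background
f0 = np.array([0, 255, 0, 0, 0], dtype=np.uint8)
f1 = np.array([0, 0, 255, 0, 0], dtype=np.uint8)
f2 = np.array([0, 0, 0, 255, 0], dtype=np.uint8)
print(tcfd_mask(f0, f1, f2).astype(int))  # [0 0 1 0 0]
```

Note how the mask fires only at the object's current position (index 2), not at the positions it vacated; in the framework above this quick localizer is combined with the IGMM-based background subtraction.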
The spatial colour distribution, the rotation- and scale-invariant uniform local binary pattern (RSIULBP) texture, and edge orientation gradients are calculated and fused to build a fused-feature matching matrix, which is integrated into data association to achieve reliable and precise multi-target tracking. In addition, low-dimensional regional covariance matrices are exploited for multi-feature fusion to improve target matching in single-target tracking, and parallel computing based on multi-threaded synchronization is employed to boost the efficiency of feature extraction and fusion. An accurate and efficient multi-target data association method is designed that integrates an improved probabilistic data association (IPDA) with a simplified joint probabilistic data association (SJPDA). IPDA combines the augmented posterior probability matrix with the fused-feature matching matrix to perform multi-target association. SJPDA ensures the efficiency of data association and yields better accuracy under low PSNR and sparse targets by sifting out high-probability events. To record and update target trajectories, and to increase the accuracy of multi-target tracking, a video target management scheme is presented: the states throughout the whole lifecycle of a target are defined and analysed, and a prediction-interpolation-based data recovery approach is discussed to restore missed measurements. A flexible and extensible data structure is designed to encapsulate target states at each time step; variable-length sequence containers store existing targets, newly appearing targets, and targets that have disappeared, and the switching criterion between target states is discussed. To quickly and robustly estimate the motion states of rigid targets, mixed Kalman/H∞ filtering based on state covariance fusion and state estimate fusion is proposed.
The H∞ filter makes no assumptions about process and measurement noise and has recursive equations similar to those of the Kalman filter; it is thus more robust against non-Gaussian noise, and the mixed Kalman/H∞ filter can guarantee both the efficiency and robustness of state estimation under uncertain noise. To predict the state of highly maneuvering targets, mixed extended Kalman/particle filtering is introduced. The extended Kalman filter linearizes system dynamic models using a Taylor series expansion and can therefore handle mildly nonlinear state estimation, while an improved sequential importance resampling particle filter is discussed for estimating target states under strong nonlinearity and dynamic backgrounds. The mixed extended Kalman/particle filtering feeds the state output of the extended Kalman filter back to the particle filter to initialize the deployment of particles. Compared with single-camera video tracking, multi-camera tracking retrieves more information about the targets of interest from different perspectives and can better resolve target occlusions. A multi-camera cooperative tracking strategy is investigated and a relay tracking scheme based on improved Camshift is proposed. To further extend the scope of tracking, a distributed multi-camera video tracking and surveillance (DMVTS) system based on hierarchical centre management modules is developed.
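As a hedged illustration of the recursive predict/update structure shared by the Kalman and H∞ filters discussed above, the sketch below runs a one-dimensional constant-velocity Kalman filter on noisy position readings. The motion model and noise covariances are illustrative assumptions, not the thesis's mixed-filter design.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Estimate [position, velocity] from noisy position-only readings."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict: propagate state and uncertainty through the motion model
        x = F @ x
        P = F @ P @ F.T + Q
        # update: blend the prediction with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# track a target moving at roughly constant speed with noisy position readings
est = kalman_track([0.1, 1.0, 2.1, 2.9, 4.0, 5.1])
```

The H∞ update replaces the gain computation with one that bounds worst-case estimation error instead of assuming Gaussian noise, but the predict/update loop shown here is the same shape.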

Wide Area Surveillance

Title: Wide Area Surveillance
Author: Vijayan K. Asari
Publisher: Springer Science & Business Media
Pages: 245
Release: 2013-11-19
Genre: Technology & Engineering
ISBN: 3642378412

The book describes a system for visual surveillance using intelligent cameras. The cameras use robust techniques for detecting and tracking moving objects, and captured object data are stored in a database in real time. The tracking data stored in the database are analysed to study the camera view, detect and track objects, and study object behavior. Together, these models provide a robust framework for coordinating the tracking of objects between overlapping and non-overlapping cameras and for recording the activity of objects detected by the system.