Digital Robust Vehicle Tracking and Detection

Abstract
Unmanned Aerial Vehicles (UAVs) have been widely used in commercial and surveillance applications in recent years, and vehicle tracking from aerial video is one of the most common applications. In this paper, a self-learning mechanism is proposed for real-time vehicle tracking. The main contribution of this paper is that the proposed system can automatically detect and track multiple vehicles, with a self-learning process that enhances the tracking and detection accuracy. Two detection methods are used: a Features from Accelerated Segment Test (FAST) with Histogram of Oriented Gradients (HoG) method, and an HSV colour feature with Grey Level Co-occurrence Matrix (GLCM) method. A Forward and Backward Tracking (FBT) mechanism is employed for the vehicle tracking. The main purpose of this research is to increase the vehicle detection accuracy by using the tracking results and a learning process that monitors the detection and tracking performance through their outputs. Videos captured from UAVs have been used to evaluate the performance of the proposed method. The results show that the proposed learning system increases the detection performance.

INTRODUCTION

Unmanned Aerial Vehicles (UAVs) have become a key research area in recent years in both military and civilian applications. They have the advantages of being small, lightweight, fast and easy to deploy, and they can achieve “zero” casualties, so they can be deployed on extreme missions. Vehicle detection from UAVs has drawn great attention in research areas such as automatic traffic monitoring, aerial surveillance and other security-related applications. There are various challenges that UAVs face; one of the main challenges of detection and tracking is that the target objects may change their shapes in the aerial images or suddenly disappear and reappear during the tracking process. Thus, the detection and tracking process needs to handle several problems. First of all, the tracking and detection system has to be scale-invariant with respect to the target, which avoids errors caused by the UAV changing its altitude during tracking. Secondly, rotationally invariant features should be considered, as the UAV's flight direction can change rapidly and unpredictably, which changes the direction of the target's movement. Furthermore, the illumination of the targets may vary depending on the flight direction of the UAV and the shooting angle to the targets, and blur can occur due to the UAV's shaking, so transformation invariance is needed. In addition, background confusion and target occlusions may exist. Finally, the most important issue is that the detection and tracking process has to run in real time.

In this paper, a vehicle tracking and detection method with self-learning is proposed, as shown in Figure 1. In the input video, vehicles are detected automatically using features extracted with the Histogram of Oriented Gradients (HoG) [1] and Features from Accelerated Segment Test (FAST) [2] together with a Support Vector Machine (SVM) classifier [3]. It is assumed that a vehicle has a higher density of corners than other objects in the environment, so finding the distribution of corners is the first step used to narrow the search area for further HoG processing; the FAST corner detector can quickly and accurately detect the relevant corner points. Another detection method, using the Grey Level Co-occurrence Matrix (GLCM) with HSV colour features, has also been used in order to demonstrate that the proposed self-learning tracking method can increase the detection accuracy.
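
A minimal sketch of this two-stage detection idea, assuming OpenCV in Python, is given below; the window size, FAST threshold, corner-density criterion and the pre-trained SVM file name are illustrative assumptions rather than the parameters used in this work.

```python
import cv2
import numpy as np

# Stage 1: FAST corners are cheap to compute; windows with a high corner
# density are treated as candidate vehicle locations (assumption of the method).
fast = cv2.FastFeatureDetector_create(threshold=25)

# Stage 2: HoG descriptors of each candidate window are classified by an SVM
# trained offline on vehicle / non-vehicle samples (hypothetical model file).
hog = cv2.HOGDescriptor(_winSize=(64, 64), _blockSize=(16, 16),
                        _blockStride=(8, 8), _cellSize=(8, 8), _nbins=9)
svm = cv2.ml.SVM_load("vehicle_hog_svm.xml")

def detect_vehicles(frame, win=64, min_corners=20):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = []
    for kp in fast.detect(gray, None):
        x = int(kp.pt[0]) - win // 2
        y = int(kp.pt[1]) - win // 2
        if x < 0 or y < 0 or x + win > gray.shape[1] or y + win > gray.shape[0]:
            continue
        patch = gray[y:y + win, x:x + win]
        # Only corner-dense windows are passed on to the HoG + SVM stage.
        if len(fast.detect(patch, None)) < min_corners:
            continue
        feature = hog.compute(patch).reshape(1, -1).astype(np.float32)
        if svm.predict(feature)[1][0, 0] == 1:  # label 1 = vehicle
            detections.append((x, y, win, win))
    return detections
```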

The proposed self-learning tracking system was inspired by the Tracking-Learning-Detection (TLD) method in [4], in which TLD tracks a single target. The approach proposed in this paper extends it to track multiple targets in real time. It is assumed that both the detection and the tracking process can make errors, so it is necessary to let them monitor each other. The TLD algorithm monitors the tracking results using the detection results. In this paper, a Forward and Backward Tracking (FBT) mechanism is proposed, which can self-check whether there are any errors in the tracking process by using the previous tracking results. The FBT can also monitor the detection results by comparing their similarity with the tracking results using Scale Invariant Feature Transform (SIFT) features.
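
One common way to realise such a forward-backward self-check (the paper does not prescribe a specific point tracker) is to track feature points forward and then backward with pyramidal Lucas-Kanade optical flow and reject points whose backward trajectory does not return to the starting position; the tracker choice and error threshold below are illustrative assumptions.

```python
import cv2
import numpy as np

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.03))

def forward_backward_check(prev_gray, curr_gray, points, max_fb_error=2.0):
    """Track points forward, then backward, and keep only the points whose
    backward-tracked position agrees with where they started.
    `points` is an (N, 1, 2) float32 array of point coordinates."""
    fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, points, None, **lk_params)
    bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None, **lk_params)
    fb_error = np.linalg.norm(points - bwd, axis=2).ravel()
    ok = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (fb_error < max_fb_error)
    return fwd[ok], ok
```

Points that survive this check define the forward tracking result for the frame; a target whose points mostly fail the check is flagged as a potential tracking error.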

The inspectors (positive and negative) have been developed for error estimation. Furthermore, the FBT updates the classifier based on the tracking results for use in future detection. Two measures are used in the FBT monitoring process. First, it is assumed that when tracking the same target over a sequence of frames, the features of the target should be very similar. Thus, the FBT applies SIFT matching to the tracking results along the tracking sequence: if the matching score between the current result and the previous result is above a threshold, the FBT is tracking the same target; conversely, if the matching score is below the threshold, the FBT considers that a different target has been tracked. The FBT maintains a tracked vehicle database (TVD) which stores the SIFT information of the vehicles that have already been tracked. The second measure is that when the FBT cannot match a result with any target in the TVD, the FBT considers this result a false positive. Once the FBT gives a tracking result, it is compared with the detection results in the following frame, which decides whether the detection result is correct or not. All decisions are saved to update the classification model for further detections. SIFT matching has been used because it offers considerably high matching performance with an acceptable processing resource requirement. In the TVD, each vehicle has its own SIFT point descriptors, which are used in the matching process.
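
A simplified sketch of the SIFT matching step against the TVD is shown below, assuming OpenCV in Python; the dictionary layout of the TVD, the Lowe ratio test and the match-count threshold are illustrative assumptions rather than the exact matching score used in the paper.

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_against_tvd(patch, tvd, min_good_matches=10, ratio=0.75):
    """Return the id of the TVD entry whose stored SIFT descriptors best match
    the tracked patch, or None if no entry passes the threshold (a result that
    the learning stage would treat as a potential false positive).
    `tvd` is assumed to map vehicle_id -> stored SIFT descriptor array."""
    _, desc = sift.detectAndCompute(patch, None)
    if desc is None:
        return None
    best_id, best_score = None, 0
    for vehicle_id, stored_desc in tvd.items():
        knn = matcher.knnMatch(desc, stored_desc, k=2)
        # Lowe ratio test to keep only distinctive matches.
        good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best_score:
            best_id, best_score = vehicle_id, len(good)
    return best_id if best_score >= min_good_matches else None
```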