Real Time Object Tracking based on Dynamic Feature Grouping with Background Subtraction

ZuWhan Kim
California PATH, University of California at Berkeley, CA, USA
http://path.berkeley.edu/~zuwhan

Abstract

Object detection and tracking has various application areas, including intelligent transportation systems. We introduce an object detection and tracking approach that combines the background subtraction algorithm and the feature tracking and grouping algorithm. We first present an augmented background subtraction algorithm which uses low-level feature tracking as a cue. The resulting background subtraction cues are used to improve the feature detection and grouping result. We then present a dynamic multi-level feature grouping approach that can be used in real-time applications and also provides high-quality trajectories. Experimental results from video clips of a challenging transportation application are presented.

1. Introduction and Previous Work

Object detection and tracking is a major research area in computer vision. One of its application areas is traffic scene analysis. Cameras are less costly and easier to install than most other sensors, so many are already installed on the roadside, particularly at intersections. The resulting video images are used to estimate traffic flows, to detect vehicles and pedestrians for signal timing, and to track vehicles and pedestrians for safety applications. For decades, various vehicle and pedestrian detection and tracking algorithms have been introduced [15], [19], [16], [3], [6], [9], [14], [1], [18], and there are many commercial systems to detect vehicles (e.g., "virtual loop detectors") and pedestrians.
Most of the above systems (and also many other object tracking applications) are based on the background subtraction algorithm. It first extracts a static background hypothesis from a sequence of images, then calculates the difference between the background hypothesis and the current image to find foreground objects. The background subtraction algorithm requires relatively little computation time and shows robust detection under good illumination conditions. However, it suffers from problems such as occlusions, the presence of shadows, and sudden illumination changes. Many efforts have been made to solve the occlusion problem, for example by applying a Markov Random Field model [9], but the results have not been satisfactory under significant occlusion, as segmenting occluded objects from the background subtraction result alone is an extremely difficult problem. Furthermore, it is difficult to deal with problems such as sudden illumination changes and stopped vehicles. For example, vehicles moving slowly in traffic congestion or stopped at an intersection will eventually be recognized as background objects. In addition, the trajectories are usually determined by linking the centers of the object blobs, which often results in zigzag trajectories.
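The generic pipeline reviewed above is easy to make concrete. Below is a minimal Python/OpenCV sketch of plain background subtraction with a running-average background model; it is the baseline the paper builds on, not the paper's augmented algorithm, and the `alpha`/`thresh` values are illustrative assumptions. The comment on the update step also shows why slowly moving or stopped vehicles are eventually absorbed into the background.

```python
import cv2
import numpy as np

def background_subtraction(frames, alpha=0.01, thresh=30):
    """Generic background subtraction: maintain a running-average
    background hypothesis and difference each frame against it.
    alpha and thresh are illustrative, not the paper's parameters."""
    background = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            background = gray.copy()
        # Pixels whose difference from the background exceeds the
        # threshold are hypothesized as foreground.
        diff = cv2.absdiff(gray, background)
        mask = (diff > thresh).astype(np.uint8) * 255
        # The background slowly absorbs each frame, which is exactly
        # why slow or stopped vehicles eventually fade into it.
        background = (1.0 - alpha) * background + alpha * gray
        yield mask
```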
Another approach is based on feature tracking and grouping [3], [1]. This is done by extracting and tracking individual corner features [7] and grouping them based on their trajectories. The grouping algorithms are applied to full trajectories of the corner features for post-processing applications. Since this method uses a set of long trajectories, segmentation among occluded objects is easier to perform than with background subtraction. A challenge in applying this algorithm is that it is not always easy to robustly track a corner feature over a long period of time, especially when vehicles turn at intersections or are occluded by other vehicles or pedestrians. In addition, keeping and processing a set of long corner trajectories can be burdensome. For example, when a vehicle is waiting at an intersection for over a minute, the corner feature trajectories must be kept for more than 1000 frames. Therefore, these algorithms cannot be applied to real-time detection and tracking applications, especially at intersections. Another challenge is that the grouping algorithms often group nearby vehicles moving together, or separate a large vehicle into two, because corner features are not evenly distributed over the vehicles in many cases.

Finally, there is an approach based on appearance-based vehicle detection [15], [19], [14]. Kim and Malik [14] introduced a model-based vehicle detection algorithm eventually adopted by the NGSIM (Next Generation SIMulation) program.

Figure 1. (a) The entire scene suddenly becomes dark, due to an auto-iris camera, as the two white vehicles at the bottom pass by. (b) A disastrous detection result without the illumination correction. (c) An enhanced result with the illumination correction.
For Mt, we start with the standard procedure, which is to 1) threshold the difference, 2) apply morphological operators (or threshold after over-smoothing), and 3) perform connected component analysis to fill holes, remove small regions, and find object blobs. After the object blobs are found, we apply an additional validation step to remove the ghosts. We assume that within every non-ghost foreground region there exists at least one valid corner, i.e., a corner feature which is not found in the background image. For more details on valid corners, see Section 3.
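The three-step procedure and the corner-based ghost check map directly onto standard OpenCV calls. The sketch below assumes the difference image is an 8-bit array; the threshold, kernel size, and minimum area are assumed values, and `valid_corners` stands in for the valid-corner set defined in Section 3.

```python
import cv2
import numpy as np

def extract_blobs(diff, thresh=30, min_area=50, valid_corners=()):
    """Threshold, morphology, connected components, then the
    corner-based ghost check. Parameter values are assumptions."""
    # 1) Threshold the background difference (diff: 8-bit image).
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # 2) Morphological closing then opening to fill holes and
    #    suppress speckle noise.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # 3) Connected components: drop small regions, collect blob boxes.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    blobs = []
    for i in range(1, n):  # label 0 is the image background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        # Ghost validation: keep a blob only if it contains at least
        # one 'valid' corner, i.e., a corner with no match in the
        # background image (see Section 3).
        if any(x <= cx < x + w and y <= cy < y + h
               for (cx, cy) in valid_corners):
            blobs.append((x, y, w, h))
    return blobs
```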
An example result is shown in Figure 1, where the illumination challenge is caused by an auto-iris camera. The two white vehicles at the bottom turn the entire scene darker (Figure 1a), which causes significant false alarms (Figure 1b). However, the error is minimized by applying the illumination correction. The resulting object blobs are not the final result, but they are used as supplementary cues in the feature tracking and grouping, which is discussed in the next section.
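The illumination correction itself is described in a part of Section 2 missing from this excerpt. Purely as an illustration of the idea, the sketch below compensates a single global gain between the current frame and the background hypothesis before differencing; the median-ratio gain estimate is an assumption, not the paper's method.

```python
import numpy as np

def gain_corrected_diff(gray, background, eps=1e-6):
    """Assumed illustration of global illumination compensation:
    estimate one multiplicative gain relating the current frame to
    the background, undo it, then difference. Not the paper's method."""
    # Robust global gain estimate: median of per-pixel intensity ratios.
    gain = np.median((gray + eps) / (background + eps))
    corrected = gray / max(gain, eps)
    return np.abs(corrected - background)
```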
3. Feature Tracking and Grouping

Previous feature grouping work [3], [1] groups corner features directly into objects using proximity and motion history. Such single-level grouping is difficult and/or computationally expensive, especially when we deal with objects of different scales (for example, bicycles, passenger cars, and trucks). For instance, the distance between two corner features that belong to the same vehicle can be much larger than that between two corner features that belong to two nearby vehicles, which can confuse the grouping algorithm. However, applying a sophisticated grouping algorithm to handle this brings a computational burden, particularly when comparing long trajectories of corner features.

To deal with the problem efficiently, we present a multi-level grouping in which individual corner features are first grouped into small clusters ("cluster features"), and the cluster features are then grouped into object-level features. The grouping is performed on each frame (dynamic grouping), as opposed to the previous efforts [3], [1], which applied the grouping algorithms to the final tracking results. Therefore, the proposed algorithm can be applied in real time. Note that one of our main goals is to generate a trajectory for each object; directly applying conventional grouping algorithms, such as K-means or Normalized Cuts [17], on a frame-by-frame basis will not provide solid trajectories.
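This excerpt does not spell out the grouping criteria, so the sketch below is only a stand-in: a greedy single-link proximity grouping applied twice, once to form small clusters from corners and once to form object-level groups from cluster centroids. The radii are illustrative, and the paper's actual rules also use motion history rather than proximity alone.

```python
import numpy as np

def group_by_distance(points, radius):
    """Greedy single-link grouping: a point closer than `radius` to
    any member of a group joins that group; a point bridging two
    groups merges them. A stand-in for the paper's criteria."""
    groups = []
    for p in points:
        merged = None
        for g in groups:
            if any(np.hypot(p[0] - q[0], p[1] - q[1]) < radius for q in g):
                if merged is None:
                    g.append(p)
                    merged = g
                else:
                    merged.extend(g)  # p bridges two groups: merge them
                    g.clear()
        groups = [g for g in groups if g]
        if merged is None:
            groups.append([p])
    return groups

def multilevel_group(corners, r_cluster=10.0, r_object=40.0):
    """Two levels: corners -> small clusters -> object-level groups.
    Radii are illustrative assumptions."""
    clusters = group_by_distance(corners, r_cluster)
    centroids = [tuple(np.mean(c, axis=0)) for c in clusters]
    objects = group_by_distance(centroids, r_object)
    return clusters, objects
```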
3.1. Corner Feature Tracking

The lowest-level corner features are detected by finding the eigenvalues of the local sums of derivatives [7]. The corner features are detected only in the foreground region, which is estimated by the background subtraction algorithm. The detected corners are tracked by applying cross-correlation template matching on a small image patch (9×9 in our implementation). The search window sizes for the matching vary with the application, but we first apply a large window (for example, 15×15) when the direction and speed of the corner are not known, and then a small window (for example, 7×7) near the expected position estimated from the previous movement. The tracked corners are validated by comparison with the background image: another template matching search is performed on the background image with a small search window (3×3 in our implementation). When a corner has a match in the background image, it is considered invalid and removed. Such invalid corners are often generated by tracking failures, such as drifting, or by errors in estimating the foreground region.
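Under that description, a corner tracker can be sketched as follows: min-eigenvalue (Shi-Tomasi-style) detection restricted to the foreground mask, cross-correlation template matching with a large window when motion is unknown and a small one otherwise, and the background-image validation search. The patch and window sizes come from the text; the helper `match_in_window` and the score thresholds are assumptions.

```python
import cv2

PATCH = 9        # template patch size (paper: 9x9)
WIN_LARGE = 15   # initial search window (paper's example: 15x15)
WIN_SMALL = 7    # window once motion is known (paper's example: 7x7)
WIN_BG = 3       # validation search on the background image (3x3)

def detect_corners(gray, fg_mask, max_corners=200):
    """Min-eigenvalue corners, restricted to the foreground mask."""
    pts = cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01,
                                  minDistance=5, mask=fg_mask,
                                  useHarrisDetector=False)
    return [] if pts is None else [tuple(p.ravel()) for p in pts]

def match_in_window(image, template, center, win):
    """Cross-correlation template matching inside a window around
    `center`; returns (best_score, best_position) or None."""
    h, w = template.shape
    cx, cy = int(center[0]), int(center[1])
    x0 = max(cx - win // 2 - w // 2, 0)
    y0 = max(cy - win // 2 - h // 2, 0)
    x1 = min(cx + win // 2 + w // 2 + 1, image.shape[1])
    y1 = min(cy + win // 2 + h // 2 + 1, image.shape[0])
    region = image[y0:y1, x0:x1]
    if region.shape[0] < h or region.shape[1] < w:
        return None
    scores = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    return best, (x0 + loc[0] + w // 2, y0 + loc[1] + h // 2)

def track_corner(gray, background, template, predicted, known_motion,
                 match_thresh=0.7, bg_thresh=0.8):
    """Track one corner, then validate it against the background
    image. The score thresholds are assumed, not from the paper."""
    win = WIN_SMALL if known_motion else WIN_LARGE
    m = match_in_window(gray, template, predicted, win)
    if m is None or m[0] < match_thresh:
        return None                      # matching failed this frame
    # Validation: a good match in the *background* image means the
    # corner belongs to the background, so it is invalid.
    bg = match_in_window(background, template, m[1], WIN_BG)
    if bg is not None and bg[0] > bg_thresh:
        return None
    return m[1]
```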
We consider a corner feature 'valid' (see Section 2) when it is tracked over a number of (three in our implementation) consecutive frames, does not have a match in the background image, and is neither moving nor picked up by an existing cluster (see Section 3.2). When corner feature matching fails over a number of (five in our implementation) consecutive frames, the feature is no longer used. Corner features are detected in each and every frame, and those not overlapping with existing ones become subject to tracking.

Note that a feature trajectory can be several thousand frames long in traffic video due to long signal waiting times at signalized intersections. However, we do not need to keep the whole trajectory, but only, say, the 20 most recent frames in our implementation, because of our dynamic multi-level grouping.
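The lifecycle rules above (valid after three tracked frames, dropped after five failed matches, only recent history kept) suggest simple per-feature bookkeeping. A minimal sketch, assuming a hypothetical CornerFeature record whose `moving` and `in_cluster` flags are set by the tracker and the grouping stage:

```python
from collections import deque

TRACKED_FOR_VALID = 3   # frames tracked before a corner counts as valid
MAX_FAILURES = 5        # consecutive failed matches before dropping it
HISTORY = 20            # recent positions kept for the dynamic grouping

class CornerFeature:
    """Per-corner bookkeeping following the rules in the text."""

    def __init__(self, position):
        # A bounded deque keeps only the recent trajectory, so even a
        # corner alive for thousands of frames stays cheap to store.
        self.history = deque([position], maxlen=HISTORY)
        self.tracked = 1       # consecutive successful matches
        self.failures = 0      # consecutive failed matches
        self.moving = False
        self.in_cluster = False

    def update(self, position):
        """Record the new position, or None when matching failed."""
        if position is None:
            self.failures += 1
        else:
            self.failures = 0
            self.tracked += 1
            self.history.append(position)

    @property
    def valid(self):
        # The text's conditions: tracked long enough, neither moving
        # nor picked up by a cluster; the no-background-match test is
        # enforced at tracking time (see track_corner above).
        return (self.tracked >= TRACKED_FOR_VALID
                and not self.moving and not self.in_cluster)

    @property
    def dead(self):
        return self.failures >= MAX_FAILURES
```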