Huaiyin Institute of Technology
Graduation Design (Thesis): Translation of Foreign Literature
Note: please bind this cover page together with the attachments.

Attachment 1: Translated text

License Plate Recognition

Abstract: License plate recognition (LPR) is a fairly well explored problem, and LPR is already a component of several commercially operational systems. Many of these systems, however, require sophisticated video capture hardware, possibly combined with infrared strobe lights, or they exploit the large size of license plates in certain geographical regions and the (artificially) high discriminability of the characters. In this paper we describe an LPR system that achieves a high recognition rate without requiring a high-end video signal or expensive hardware. We also explore the problem of car make and model recognition, with the aim of searching archived surveillance video for a vehicle given a partial license plate number and a visual description of the car. The proposed method provides valuable information to civilian infrastructure and to law enforcement in a variety of scenarios.
1 Introduction

License plate recognition (LPR) is widely regarded as a solved problem, with many systems already in operation. Some well-known settings are the congestion charging program in central London, border patrol duties for the U.S. Customs, and toll-road enforcement in parts of Canada and the United States. Although few details about the accuracy of commercially deployed LPR systems are released to the public, it is known that they work well only under controlled conditions. They therefore have two main disadvantages, both of which we address.

First, they require high-resolution and sometimes specialized imaging hardware. Most academic research in this area likewise requires high-resolution images, or relies on geographically specific license plates and takes advantage of the large spacing between characters in those regions, and even of the special features of commonly misread characters.

Second, LPR systems by their nature treat the license plate as a car's fingerprint; in other words, they determine a vehicle's identity solely from the plate attached to it. One can imagine, however, a situation in which the plates of two cars of completely different makes and models are maliciously swapped; such systems would not notice the problem. We as humans are also not very good at reading a car's license plate unless the car is quite close to us, nor can we easily remember all of its characters. We can, however, recognize and remember a car's appearance even as it speeds away from us. In fact, the first piece of information shown on an Amber Alert sign is the car's make and model, and only then its license plate number, sometimes not even a complete number. Therefore, given a description of a car's appearance and a partial license plate number, the authorities should be able to query their surveillance systems for similar vehicles and retrieve a timestamp of when the missing vehicle was seen, together with the archived video for that time.

In this paper we describe a license plate recognition method that performs well without requiring expensive imaging hardware, and that can also be used to explore car make and model recognition (MMR). Because the license plate and the make and model carry complementary information, using both yields not only more accurate plate reading but also a more capable vehicle surveillance system.
2 License Plate Detection

In any object recognition system there are two major problems to be solved: detecting the object in the scene and recognizing it, with detection being an important prerequisite. We approach license plate detection as a text extraction problem [5]. The detection method can be described as follows. A window of roughly the dimensions of a license plate image is placed over each frame of the video stream, and its image content is passed as input to a classifier whose output is 1 if the window contains a license plate and 0 otherwise. The classifier is applied at every candidate plate location in the frame, and the locations classified as plates are recorded and output as a list of candidates. In practice, this requires a strong classifier built from many weak classifiers, each specialized to a different license plate feature, so that together they discriminate plates more accurately. The strong classifier is built with the AdaBoost algorithm: the weak classifiers need only be better than 50% accurate, and AdaBoost selects the best ones from the set of weak classifiers, each of which is implemented using a single feature.

3 Make and Model Recognition

As with the license plate recognition problem, detecting the car is the first step in performing make and model recognition (MMR). To this end, one could apply a motion segmentation method to estimate a region of interest (ROI) containing the car. Instead, we decided to use the location of a detected license plate as an indication of the presence and position of a car in the video stream, and to crop an ROI of the car for recognition. This approach would also be useful for make and model recognition in still images, where the segmentation problem is more difficult.

3.1 Character Recognition
Our initial intent was to apply a binarization algorithm, such as the modified version of Niblack's algorithm used by Chen and Yuille [5], to the license plate images extracted by our detector, and then to feed the binarized image to a commercial OCR package. We found, however, that even at a resolution of 104 × 31 the OCR packages we tried produced very poor results. Perhaps this should not come as a surprise, considering the many custom OCR solutions used in existing LPR systems.

Unless the text to be read is handwritten, OCR software commonly segments the characters first and then performs recognition on the segmented image. The simplest segmentation methods usually project the pixels onto rows and columns and place divisions at local minima of the projection functions. In our data the resolution is too low to segment characters reliably in this way, so we decided instead to apply simple template matching, which can simultaneously find both the locations of the characters and their identities.

The algorithm can be described as follows. For each example of each character, we search all possible offsets of the template image within the license plate image and record the top N best matches. The search uses normalized cross-correlation (NCC), and a threshold on the NCC score is applied before a location is considered a possible match. If more than one character matches within a region the size of the average character, the character with the higher correlation is kept and the one with the lower correlation is discarded. Once all templates have been searched, the characters found in each region are ordered from left to right to form a string. N depends on the resolution of the license plate image and should be chosen appropriately: when the same character appears several times in the image, the top N matches do not all fall on a single instance of it, and too small an N would leave some regions uncovered. This method may seem inefficient, but the recognition process takes on the order of half a second at a resolution of 104 × 31, which we found acceptable.
4 Datasets

We automatically generated a database of car images by running our license plate detector and tracker on several hours of video and cropping a fixed window of 400 × 220 pixels around the license plate in the middle frame of each tracked sequence. This procedure yielded 1,140 images in which cars of a given make and model are all roughly the same size. The crop window was positioned so that the license plate is centered in the bottom third of the image. We chose this reference position so that the crop captures the car itself rather than the background; had we centered the crop on the plate, the fact that plates are mounted low on the bumper would have left much of the image filled with road.

After collecting these images, we manually assigned make, model, and year labels to 790 of the 1,140 images. We were unable to label the remaining 350 images because we were not familiar with those cars. We often used the California Department of Motor Vehicles web site to determine makes and models we did not know. The site allows users to enter a license plate or vehicle identification number to check whether a car has passed its most recent smog check; for each query it returns the smog check history together with a description of the car's make and model, when available. California requires all vehicles more than three years old to pass a smog check every two years, so we could rely on such queries to label cars that were outside our personal experience.

We split the 1,140 labeled images into a query set and a database set. The query set contains 38 images chosen to represent a variety of make and model classes, in some cases with multiple queries of the same make and model but different years, in order to capture the variation of model designs over time. We evaluated the performance of each recognition method by finding the best match in the database for each query image.
4.1 SIFT Matching

Scale-invariant feature transform (SIFT) features, recently developed by Lowe [14], are invariant to scale and rotation and even partially invariant to illumination differences, which makes them well suited to object recognition. We applied SIFT matching to the MMR problem as follows:

1. For each image d in the database and for the query image q, perform keypoint localization and compute descriptors.

2. For each database image d:

(a) For each keypoint kq in q, find the keypoint kd in d with the smallest L2 distance to kq, provided that this distance is smaller than the distance to the next closest descriptor by at least a fixed factor. If no such kd exists, move on to the next kq.

(b) Count the number n of descriptors that were successfully matched in d.

3. Choose the d with the largest n and consider it the best match.
5 Results

The SIFT matching algorithm described above yielded a recognition rate of 89.5% on the query set. Recognition results for some of the queries in the test set are shown in Figure 6. For some of the queries with more than 20 similar cars in the database, the top 10 matches were all of the same make and model.

Most of the queries that SIFT matching failed to classify correctly had 5 or fewer similar entries in the database. Based on the results for queries whose makes and models have many examples in the database, it is safe to assume that having more examples per make and model class will further increase the recognition rate.

Attachment 2: Original text (photocopy)

License plate recognition

Abstract
License Plate Recognition (LPR) is a fairly well explored problem and is already a component of several commercially operational systems. Many of these systems, however, require sophisticated video capture hardware possibly combined with infrared strobe lights, or exploit the large size of license plates in certain geographical regions and the (artificially) high discriminability of characters. In this paper, we describe an LPR system that achieves a high recognition rate without the need for a high-end video signal or expensive hardware. We also explore car make and model recognition, with the aim of searching surveillance video archives given a partial license plate number and a visual description of the car; the method provides valuable information to civilian infrastructure and to law enforcement in a variety of scenarios.

1 Introduction

License plate recognition (LPR) is widely regarded to be a solved problem with many systems already in operation. Some well-known settings are the London Congestion Charge program in Central London, border patrol duties by the U.S. Customs, and toll road enforcement in parts of Canada and the United States. Although few details are released to the public about the accuracy of commercially deployed LPR systems, it is known that they work well under controlled conditions. However, they have two main disadvantages which we address in this paper.

Firstly, they require high-resolution and sometimes specialized imaging hardware. Most of the academic research in this area also requires high-resolution images or relies on geographically-specific license plates and takes advantage of the large spacing between characters in those regions and even the special character features of commonly misread characters.

Secondly, LPR systems by their nature treat license plates as cars' fingerprints. In other words, they determine a vehicle's identity based solely on the plate attached to it. One can imagine, however, a circumstance where two plates from completely different make and model cars are swapped with malicious intent, in which case these systems would not find a problem. We as humans are also not very good at reading cars' license plates unless they are quite near us, nor are we very good at remembering all of their characters; we are, however, able to recognize and remember a car's appearance even as it speeds away from us. In fact, the first bit of information Amber Alert signs show is the car's make and model and only then its license plate number, sometimes not even a complete number. Therefore, given the description of a car and a partial license plate number, the authorities should be able to query their surveillance systems for similar vehicles and retrieve a timestamp of when that vehicle was last seen along with archived video data for that time. In this paper, we describe an LPR method that performs well without the need for expensive imaging hardware and that can also be used to explore car make and model recognition (MMR); because plate and make/model information are complementary, using both yields not only more accurate plate reading but also a more capable vehicle surveillance system.
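As an illustration of the retrieval scenario just described, the sketch below filters an archive of sightings by a partial plate number and a make/model description; the record schema, field names and example values are hypothetical, since the paper does not specify one.

# Hypothetical archive query: return sightings whose plate contains the partial
# number and whose make/model label matches the description.
from dataclasses import dataclass

@dataclass
class Sighting:
    timestamp: str     # e.g. "2006-03-14 17:22:05"
    plate: str         # plate string read by the LPR module
    make_model: str    # label produced by the MMR module

def query_archive(sightings, partial_plate, make_model):
    """Return the sightings matching both the partial plate and the description."""
    return [s for s in sightings
            if partial_plate in s.plate and s.make_model == make_model]

A call such as query_archive(archive, "4ABC", "Honda Civic") would return the archived sightings, and hence the timestamps, at which a Honda Civic with "4ABC" somewhere in its plate was seen.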
2 License Plate Detection

In any object recognition system, there are two major problems that need to be solved – that of detecting an object in a scene and that of recognizing it; detection being an important requisite. We approached the license plate detection problem as a text extraction problem [5]. The detection method can be described as follows. A window of interest, of roughly the dimensions of a license plate image, is placed over each frame of the video stream and its image contents are passed as input to a classifier whose output is 1 if the window contains a license plate and 0 otherwise. The classifier is applied at all candidate plate locations in the frame, and the locations classified as containing a plate are output as a list of candidates. In practice this calls for a strong classifier built from many weak classifiers, each specialized to a different license plate feature; the strong classifier is trained with AdaBoost, since the weak classifiers need only be better than 50% accurate, and AdaBoost selects the best of them from the weak classifier set, each implemented with a single feature.
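The paper gives no implementation details beyond this description, but the sliding-window scan with an AdaBoost-style strong classifier can be sketched roughly as follows. This is a minimal Python illustration: the feature functions, thresholds and weights are hypothetical placeholders for whatever the trained detector actually uses, and the 104 × 31 window size is borrowed from the character-recognition section.

# Illustrative sketch only -- the real detector's features and training are not
# described here in enough detail to reproduce.
import numpy as np

class WeakClassifier:
    """A decision stump over a single scalar feature, as produced by AdaBoost."""
    def __init__(self, feature_fn, threshold, polarity, alpha):
        self.feature_fn = feature_fn   # maps a window (2-D grayscale array) to a scalar
        self.threshold = threshold
        self.polarity = polarity       # +1 or -1
        self.alpha = alpha             # weight assigned by AdaBoost

    def predict(self, window):
        value = self.feature_fn(window)
        return 1 if self.polarity * value < self.polarity * self.threshold else 0

def strong_classify(window, weak_classifiers):
    """AdaBoost strong classifier: weighted vote of the weak classifiers."""
    score = sum(wc.alpha * wc.predict(window) for wc in weak_classifiers)
    return score >= 0.5 * sum(wc.alpha for wc in weak_classifiers)

def detect_plates(frame, weak_classifiers, win_w=104, win_h=31, step=4):
    """Slide a plate-sized window over a grayscale frame; return candidate corners."""
    candidates = []
    h, w = frame.shape
    for y in range(0, h - win_h, step):
        for x in range(0, w - win_w, step):
            window = frame[y:y + win_h, x:x + win_w]
            if strong_classify(window, weak_classifiers):
                candidates.append((x, y))
    return candidates

In a real detector the weak classifiers and their weights would come from AdaBoost training on labelled plate and non-plate windows; the sketch only shows how the weighted vote and the window scan fit together.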
3 Make and Model Recognition

As with the license plate recognition problem, detecting the car is the first step to performing make and model recognition (MMR). To this end, one can apply a motion segmentation method to estimate a region of interest (ROI) containing the car. Instead, we decided to use the location of detected license plates as an indication of the presence and location of a car in the video stream and to crop an ROI of the car for recognition. This method would also be useful for make and model recognition in still images, where the segmentation problem is more difficult.
3.1 Character Recognition

It was our initial intent to apply a binarization algorithm, such as a modified version of Niblack's algorithm as used by Chen and Yuille [5], on the license plate images extracted by our detector, and then use the binarized image as input to a commercial OCR package. We found, however, that even at a resolution of 104 × 31 the OCR packages we experimented with yielded very poor results. Perhaps this should not come as a surprise considering the many custom OCR solutions used in existing LPR systems.
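For reference, standard Niblack binarization thresholds each pixel against the local mean plus k times the local standard deviation. The modified variant used by Chen and Yuille is not spelled out here, so the sketch below shows only the textbook form, with an illustrative window size and k.

# Standard Niblack local thresholding: T(x, y) = mean(x, y) + k * std(x, y).
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(gray, window=15, k=-0.2):
    """Return a boolean image where pixels darker than the local threshold are True."""
    gray = gray.astype(np.float64)
    mean = uniform_filter(gray, size=window)
    mean_sq = uniform_filter(gray * gray, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    threshold = mean + k * std
    # Dark characters on a light plate end up as foreground (True).
    return gray < threshold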
Unless text to be read is in hand-written form, it is common for OCR software to segment the characters and then perform recognition on the segmented image. The simplest methods for segmentation usually involve the projection of row and column pixels and placing divisions at local minima of the projection functions. In our data, the resolution is too low to segment characters reliably in this fashion, and we therefore decided to apply simple template matching instead, which can simultaneously find both the locations of the characters and their identities.
The algorithm can be described as follows. For each example of each character, we search all possible offsets of the template image in the license plate image and record the top N best matches. The searching is done using normalized cross correlation (NCC), and a threshold on the NCC score is applied before considering a location a possible match. If more than one character matches a region the size of the average character, the character with the higher correlation is chosen and the character with the lower correlation is discarded. Once all templates have been searched, the characters found in each region are ordered from left to right to form a string. N depends on the resolution of the license plate image and should be chosen appropriately: when the same character appears several times in the image, the top N matches do not all fall on a single instance of it, and too small an N would leave some of those regions uncovered.

This method may seem inefficient, however, the recognition process takes on the order of half a second for a resolution of 104 × 31, which we found to be acceptable.
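As a rough illustration of this matching scheme, the sketch below uses OpenCV's normalized correlation for the template search; the template set, the NCC threshold and the value of N are placeholders rather than the paper's actual settings.

# Sketch of the NCC template-matching recognizer described above.
# `templates` maps a character label to a list of example template images,
# each assumed smaller than the plate image.
import cv2
import numpy as np

def match_characters(plate, templates, n_best=5, ncc_thresh=0.6):
    """Return the decoded string and the accepted (x, label, score) matches."""
    raw = []
    for label, examples in templates.items():
        for tmpl in examples:
            ncc = cv2.matchTemplate(plate, tmpl, cv2.TM_CCOEFF_NORMED)
            flat = ncc.ravel()
            # Keep the top-N offsets for this template, subject to the threshold.
            for idx in np.argsort(flat)[-n_best:]:
                score = float(flat[idx])
                if score >= ncc_thresh:
                    row, col = np.unravel_index(idx, ncc.shape)
                    raw.append((int(col), label, score))

    # Resolve overlaps: within one average character width, keep the best match.
    char_width = int(np.mean([t.shape[1] for ex in templates.values() for t in ex]))
    raw.sort(key=lambda m: -m[2])
    accepted = []
    for x, label, score in raw:
        if all(abs(x - ax) >= char_width for ax, _, _ in accepted):
            accepted.append((x, label, score))

    # Read off the plate string from left to right.
    accepted.sort(key=lambda m: m[0])
    return "".join(label for _, label, _ in accepted), accepted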
4 Datasets

We automatically generated a database of car images by running our license plate detector and tracker on several hours of video data and cropping a fixed window of size 400 × 220 pixels around the license plate of the middle frame of each tracked sequence. This method yielded 1,140 images in which cars of each make and model were of roughly the same size. The crop window was positioned such that the license plate was centered in the bottom third of the image. We chose this position as a reference so that the crop captures the car itself rather than the background; had the plate been centered in the window, the low mounting position of plates on bumpers would have left much of the crop filled with road.
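A minimal sketch of this fixed-window crop follows, assuming that "centered in the bottom third" places the plate centre at the midpoint of the lowest third of the 400 × 220 window; the paper does not state the exact anchor point.

# Crop a fixed 400 x 220 window so the detected plate centre sits horizontally
# centred and at the midpoint of the bottom third of the window.
CROP_W, CROP_H = 400, 220

def crop_car(frame, plate_cx, plate_cy):
    # The bottom third spans 2/3*H .. H, so its midpoint lies at 5/6 of the height.
    x0 = int(plate_cx - CROP_W / 2)
    y0 = int(plate_cy - 5 * CROP_H / 6)
    h, w = frame.shape[:2]
    # Clamp so the crop stays inside the frame.
    x0 = max(0, min(x0, w - CROP_W))
    y0 = max(0, min(y0, h - CROP_H))
    return frame[y0:y0 + CROP_H, x0:x0 + CROP_W]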
After collecting these images, we manually assigned make, model, and year labels to 790 of the 1,140 images. We were unable to label the remaining 350 images due to our limited familiarity with those cars. We often made use of the California Department of Motor Vehicles' web site to determine the makes and models of cars with which we were not familiar. The web site allows users to enter a license plate or vehicle identification number for the purposes of checking whether or not a car has passed its most recent smog check; for each query it returns the smog check history along with a make and model description of the car, when available. California requires all vehicles more than three years old to pass a smog check every two years, so we could rely on such queries to label cars outside our own experience.

We split the 1,140 labeled images into a query set and a database set. The query set contains 38 images chosen to represent a variety of make and model classes, in some cases with multiple queries of the same make and model but different year in order to capture the variation of model designs over time. We evaluated the performance of each of the recognition methods by finding the best match in the database for each of the query images.
4.1 SIFT Matching

Scale invariant feature transform (SIFT) features recently developed by Lowe [14] are invariant to scale, rotation and even partially invariant to illumination differences, which makes them well suited for object recognition. We applied SIFT matching to the problem of MMR as follows:

1. For each image d in the database and a query image q, perform keypoint localization and descriptor assignment.

2. For each database image d:

(a) For each keypoint kq in q, find the keypoint kd in d that has the smallest L2 distance to kq and whose distance is smaller than the distance to the next closest descriptor by at least a fixed factor. If no such kd exists, examine the next kq.

(b) Count the number of descriptors n that successfully matched in d.

3. Choose the d that has the largest n and consider that the best match.
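Steps 1-3 correspond closely to standard SIFT matching with a nearest-neighbour distance-ratio test, so a short OpenCV sketch is given below; the 0.8 ratio is Lowe's commonly quoted value standing in for the unspecified factor in step 2(a), and the dictionary-of-images interface is an assumption.

# Sketch of the SIFT-based make/model matcher described in steps 1-3.
import cv2

def best_database_match(query_img, database_imgs, ratio=0.8):
    """Return the name of the database image with the most ratio-test matches."""
    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher(cv2.NORM_L2)

    _, q_desc = sift.detectAndCompute(query_img, None)
    best_d, best_n = None, -1
    for name, db_img in database_imgs.items():
        _, d_desc = sift.detectAndCompute(db_img, None)
        if q_desc is None or d_desc is None:
            continue
        # Step 2(a): accept a query descriptor's nearest neighbour in d only if it
        # is sufficiently closer than the second nearest; step 2(b): count them.
        matches = bf.knnMatch(q_desc, d_desc, k=2)
        n = sum(1 for pair in matches
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
        # Step 3: keep the database image with the largest match count.
        if n > best_n:
            best_d, best_n = name, n
    return best_d, best_n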
5 Results

The SIFT matching algorithm described above yielded a recognition rate of 89.5% on the query set. Recognition results for some of the queries in the test set are shown in Figure 6. The top 10 matches were all of the same make and model for some of the queries with over 20 similar cars in the database.

Most of the queries SIFT matching was not able to classify correctly had 5 or fewer entries similar to it in the database. Based on the results of queries corresponding to makes and models with many examples in the database, it is safe to assume that having more examples per make and model class will increase the recognition rate.