4100 English words, about 20,500 English characters; roughly 6,600 Chinese characters in the translation.
Source: Chaudhary A, Raheja J L. Light invariant real-time robust hand gesture recognition[J]. Optik, 2018, 159: 283-294.

Light invariant real-time robust hand gesture recognition
Ankit Chaudhary, J.L. Raheja

Abstract
Computer vision has spread over different domains to facilitate difficult operations. It works as the artificial eye for many industrial applications, observing elements, automating processes and finding defects. Vision-based systems can also be applied to normal human life operations, but changing light conditions are a big problem for these systems. Hand gesture recognition can be embedded into many existing interactive applications and games to make interaction natural and easy, but changing illumination and non-uniform backgrounds make it very difficult to achieve good image segmentation. If a vision-based system is installed in the public domain, many different people are expected to use the application. This paper demonstrates a light intensity invariant technique for hand gesture recognition which can also be easily applied to other vision-based applications. The technique has been tested on different people under different light conditions with extreme changes in intensity. This matters because the same skin color looks different under changed light intensity, and different skin colors may look the same under changed light intensity. The orientation histogram was used to extract unique features of a hand gesture, and the features were classified using a supervised ANN. An overall accuracy of 92.86% is achieved in environments with extreme changes in light intensity.
Keywords: Gesture recognition; Orientation histogram; Light intensity invariant systems; Extreme change in light intensity; Natural computing; Robust skin detection
1. Introduction
Computer vision applications have been part of industrial operations for more than four decades. They help speed up industrial processes, automate many difficult tasks and also help in finding minor defects [1]. Many applications have used hand gesture recognition techniques for different purposes, as hand gestures provide a natural way to communicate with machines [2–5]. These applications were initially based on wired gloves, color strips or chemicals to detect the region of interest (ROI) smoothly. A survey of different devices and techniques used for hand gesture recognition can be found in [6]. To make human-machine communication more effective, gesture recognition of the bare hand was introduced, where any person can use his or her hand in a natural position [7–10]. A lot of work has been done in the area of natural hand gesture recognition to make it more robust. Currently, this kind of application [11] and games are more popular, as users feel comfortable and do not need anything extra to operate the vision-based system.
Recently there has been a growing interest in the field of light intensity-invariant object recognition. For advanced applications in this area, one can set up a system in the laboratory under ideal conditions. However, in practical scenarios, vision systems may be deployed in the public domain, where daylight also has an effect. The light intensity may not be the same everywhere; hence a robust system that operates under all types of light conditions is required. Light intensity has a large impact on visual image processing algorithms, because image segmentation depends on the colors in the image, and these colors depend on the light intensity at the time of image capture. If we are working with human skin detection, then the scenario differs from classifying letters or characters, biological cells, electronic waveforms or signals, states of a system, or any other items that one may desire to classify.
Any pattern recognition system consists of two components, namely feature transformation and a classifier. The observation vector is first transformed into another vector whose components are called features. These are intended to be fewer in number than the observations collected, but must collectively represent most of the information needed for the classification of patterns. By reducing the observations to a smaller number of features, one may design a decision rule that is more reliable. For a given number of training samples, one can generally obtain more accurate estimates of the class conditional density functions and thus formulate a more reliable decision rule [22]. In the past, several methods have been used for feature extraction; generally, the features are suggested by the nature of the problem. In image processing applications, light intensity plays an important role because it significantly affects the segmentation of the ROI from the original image frame. If the light intensity changes, then the thresholds of the skin filter also have to be changed. This motivates the development of techniques that are applicable to different light intensities.
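To make the dependence on light intensity concrete, the sketch below shows a fixed-threshold skin filter in HSV color space and how its output degrades when the same frame is simply darkened. This is a minimal illustration, not the filter used in the paper (its preprocessing follows [26]); the HSV bounds, file name and scaling factor are assumptions.

```python
import cv2
import numpy as np

# Illustrative fixed HSV thresholds for "skin"; these bounds are assumptions,
# not the values used in the paper.
HSV_LOW = np.array([0, 40, 60], dtype=np.uint8)     # hue, saturation, value
HSV_HIGH = np.array([25, 180, 255], dtype=np.uint8)

def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Binary mask of pixels falling inside the fixed HSV skin range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, HSV_LOW, HSV_HIGH)

if __name__ == "__main__":
    frame = cv2.imread("hand.jpg")                   # hypothetical input frame
    darker = cv2.convertScaleAbs(frame, alpha=0.4)   # simulate reduced light
    # The darker frame pushes many skin pixels below the fixed V threshold,
    # so far fewer pixels survive; fixed thresholds break the segmentation.
    print("skin fraction, normal light:", skin_mask(frame).mean() / 255)
    print("skin fraction, low light:   ", skin_mask(darker).mean() / 255)
```

This is exactly the failure mode that a light-invariant feature such as the orientation histogram is meant to avoid.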
The Orientation Histogram (OH) technique for feature extraction was developed by McConnell [23]. The major advantage of this technique is that it is simple and robust to lighting changes [24]. If we follow a pixel-intensities approach, certain problems arise due to varying illumination [16]: if pixel-by-pixel proximity for the same gesture is computed from two images taken under different illumination conditions, the distance between them will be large. In such scenarios the picture itself acts as the feature vector. The main motivation for using the orientation histogram is therefore the requirement of lighting and position invariance. Another important aspect of gesture recognition is that, irrespective of the orientation of the hand in different images, we must get the same output for the same gesture. This can be achieved by forming a local histogram of local orientations [25]. Hence, this approach must be robust to illumination changes and must also offer translational invariance: the gesture representation should be the same regardless of where the hand occurs in the image. The pixel levels of the hand vary considerably with the light; the orientation values, on the other hand, remain fairly constant.
We need to calculate the local orientation from the direction of the image gradient. The local orientation angle θ is a function of the position (x, y) and the image intensities I(x, y). The angle θ is defined as:

θ(x, y) = arctan[I(x, y) − I(x − 1, y), I(x, y) − I(x, y − 1)]        (1)

where arctan[a, b] is the two-argument arctangent. Now form a vector Φ of N elements, with the i-th element counting the number of orientation values θ(x, y) that fall between the angles (i − 1/2)·360°/N and (i + 1/2)·360°/N. Φ is defined as:

Φ(i) = number of pixels (x, y) with (i − 1/2)·360°/N ≤ θ(x, y) < (i + 1/2)·360°/N,   i = 1, …, N        (2)
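As a concrete illustration of Eqs. (1) and (2), the sketch below computes the local orientation at each pixel with a two-argument arctangent of backward differences and bins the angles into an N-element histogram Φ. It is a minimal reimplementation of the idea for a grayscale image, not the authors' code; the bin count N and the final normalization are assumptions.

```python
import numpy as np

def orientation_histogram(image: np.ndarray, n_bins: int = 36) -> np.ndarray:
    """Orientation histogram of a 2D grayscale image, following Eqs. (1)-(2).

    n_bins is the N of Eq. (2); 36 is an assumed value, not fixed by the paper.
    """
    img = image.astype(np.float64)
    # Backward differences matching I(x,y) - I(x-1,y) and I(x,y) - I(x,y-1).
    dx = img[1:, 1:] - img[:-1, 1:]
    dy = img[1:, 1:] - img[1:, :-1]
    # Eq. (1): local orientation angle in degrees, mapped to [0, 360).
    theta = np.degrees(np.arctan2(dx, dy)) % 360.0
    # Eq. (2): bin i (0-indexed here) collects angles within half a bin width
    # of i * 360/N.
    bin_width = 360.0 / n_bins
    indices = np.floor(theta / bin_width + 0.5).astype(int) % n_bins
    hist = np.bincount(indices.ravel(), minlength=n_bins).astype(np.float64)
    # Normalization (an assumption) makes the feature independent of hand size.
    total = hist.sum()
    return hist / total if total > 0 else hist

if __name__ == "__main__":
    # A uniform brightness change scales both differences equally, so the
    # angles, and hence the histogram, stay essentially the same.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    print(np.abs(orientation_histogram(img) - orientation_histogram(0.5 * img)).max())
```

The resulting vector Φ is the light-invariant feature that, according to the abstract, is then passed to a supervised ANN for classification.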
3. Light invariant system
The hand gesture recognition system works on the principle of 2D computer vision. The system has an interface with a small camera which captures the user's gestures. The input to the system is the image frame of a hand moving in front of the camera, captured as live video. The preprocessing of the image frame was done as discussed in [26] under a real-time constraint. The resulting image is the ROI, i.e. only the hand gesture image. Now we need to extract feature vectors from the input image in order to recognize it with the help of a classifier. As this system was for research purposes only, we took only s
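The excerpt breaks off here, but the abstract states that the orientation histogram features are classified with a supervised ANN. The sketch below shows what that final stage could look like, using scikit-learn's MLPClassifier as a stand-in; the network size, number of gesture classes and training data are assumptions, since the excerpt does not specify them.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training set: one orientation-histogram feature vector (Eq. (2))
# per segmented hand image, paired with an integer gesture label. The shapes
# and the number of classes are assumptions for illustration only.
rng = np.random.default_rng(0)
X_train = rng.random((200, 36))          # 200 samples, N = 36 histogram bins
y_train = rng.integers(0, 5, size=200)   # 5 gesture classes (assumed)

# A small supervised feed-forward network; the layer sizes are illustrative.
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
ann.fit(X_train, y_train)

# At run time each frame would be segmented, reduced to its orientation
# histogram, and fed to the trained network to obtain the gesture class.
new_feature = rng.random((1, 36))
print("predicted gesture:", ann.predict(new_feature)[0])
```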