Robot visual design with the panorama synthesis function

(2. School of Information Science and Technology, Tsinghua University, Beijing 100084, China)

Abstract. Panorama synthesis is the basis of target tracking. By synthesizing a panorama, the visual range captured by the camera can be enlarged; against this larger background, the suspicious target that needs to be tracked is more easily detected, selected and then tracked. A panorama synthesis function and a suspicious-target selection function are added to the existing target tracking of the robot integrated machine, so that tracking runs only on the selected target, which makes the tracking more targeted.

Key words: panorama; synthetic algorithm; robot integrated machine; target tracking; interaction

1. Introduction

In some fields, wide-view, high-resolution images and video are becoming more and more important, for example the construction of panoramic views in photogrammetry, background reconstruction in video coding, panoramic video monitoring systems, and virtual environment construction in virtual reality technology. To obtain a wide-view scene image, one must either adjust the camera focal length or use special equipment (a panoramic wide-angle or fish-eye lens) …

According to the projection plane used, panoramas can be divided into cylindrical, spherical and cubic panoramas. Cylindrical panoramic imaging is now a mature technology, but it is only suitable for stitching image sequences obtained from a one-dimensional rotation of the camera along the horizontal direction. Research on cubic panoramas is deepening, but the effect is not as faithful as the spherical one. The spherical panorama is suitable for describing large-scale scenes and accords with people's habits of observation.

On the other hand, in a visual monitoring system, in order to detect suspicious situations, the moving objects in the scene need to be segmented and described, and it must then be decided whether each object is suspicious or not. In some special scenes, the distance from a suspicious object to a particular location also has to be judged, for example that the object does not come too close to some restricted zone, otherwise a warning must be raised.

Many such applications share some common features: one is the requirement to detect certain objects in the video stream; the other is to analyse and track the video information and store it selectively. People usually only care about the video content around the moment an incident occurred, whereas in past visual systems all video information was preserved, which leads to tremendous data redundancy: it not only occupies a large amount of space but also sharply decreases the efficiency of manually finding abnormal events.

This research work is based on the above two aspects: panorama composition, to extend the camera's saccadic range, and target tracking, to keep the target in the lens and continuously captured.

2. System structure and scheme constraints

2.1 System structure

The robot vision mentioned in this system refers to the camera mounted in the application-oriented Robot Integrated Machine (RIM) remote control scheme; the structure of the scheme is shown in Figure 1 [6].

Fig. 1 RIM remote control scheme system structure diagram

In Figure 1, the camera is the integrated machine's visual hardware and is responsible for monitoring the entire field. According to the camera information and the feedback information from the teleoperation platform, the operator can accurately grasp the actual situation of the RIM and control the movement of the RIM and the camera. Within this scheme, application-oriented dynamic target tracking has already been realized. This article builds on it to give the RIM a panorama composition function and a suspicious-target selection function.

2.2 Scheme constraints

As the first step of the visual system design, panorama composition is the foundation of target tracking. Through panorama composition, the visual range captured by a camera can be expanded, and in the larger field of vision the target that needs to be tracked is more easily detected.

The panorama synthesis module realized in the system obtains the video stream in two ways: by capturing a real-time image stream from the camera, or by reading an AVI video file stored locally. Through the matching algorithm, the system judges the positional relationship between two adjacent frames, completes the image mosaic, and displays the current state of the panorama on the user interface in real time.

The system assumes that after the RIM stops moving, the camera scans only in the horizontal direction (the pitch angle is not considered), which greatly simplifies the computation of the panoramic image mosaic: the upper and lower boundaries of two adjacent frames are aligned in the horizontal direction, so the system only needs to find matching points and splice the frames together. Of course, during the splicing process the system also judges in real time whether the camera's saccade direction has changed.
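Under this horizontal-scan constraint, splicing reduces to finding the horizontal offset between the new frame and the previous one and copying the non-overlapping strip onto a wide panorama canvas. The sketch below, written against the OpenCV 1.x C API used elsewhere in this paper, only illustrates this splicing step for a left-to-right scan; the offset value, the function name and the canvas bookkeeping are assumptions of the sketch, with the offset taken from the matching step described in Section 3.

```cpp
#include <opencv/cv.h>
#include <opencv/highgui.h>

// Append the non-overlapping strip of 'frame' to the panorama canvas.
// 'offset' is the horizontal shift of 'frame' relative to the previous
// frame (found by the matching step); 'pano_width' tracks how many
// columns of the canvas are already filled.
void appendFrame(IplImage* panorama, IplImage* frame,
                 int offset, int* pano_width)
{
    // Only the rightmost 'offset' columns of the new frame are new scene
    // content; the rest overlaps the previously stitched region.
    int new_cols = offset;
    if (new_cols <= 0 || *pano_width + new_cols > panorama->width)
        return;                           // nothing new, or canvas is full

    // Source ROI: the new strip at the right edge of the frame.
    CvRect src_roi = cvRect(frame->width - new_cols, 0,
                            new_cols, frame->height);
    // Destination ROI: the next free columns of the panorama canvas.
    CvRect dst_roi = cvRect(*pano_width, 0, new_cols, frame->height);

    cvSetImageROI(frame, src_roi);
    cvSetImageROI(panorama, dst_roi);
    cvCopy(frame, panorama);              // paste the strip
    cvResetImageROI(frame);
    cvResetImageROI(panorama);

    *pano_width += new_cols;              // advance the fill pointer
}
```

A right-to-left scan would instead prepend the strip, which is why the already-stitched part of the panorama has to be shifted and redisplayed in that case (see Section 5.1).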

Thus, the design goal of the panorama synthesis module is to use an efficient and accurate matching algorithm to synthesize the image in real time and to judge the camera's saccade direction in real time, in order to determine whether the part of the panoramic image that has already been stitched needs to be updated. Eventually the panorama is displayed on the interactive interface in real time.

3. A panorama composition design

3.1 Algorithm design

The panorama composition module is one of the two basic function modules of the system; its function is to complete the panorama synthesis of a video stream and so broaden the camera's field of view. The system provides two input methods for the module: capturing video from a camera directly, or reading a local video file. If the camera capture mode is chosen, the video can also be written to the local hard disk as a file; similarly, the panorama image files can be written to the local disk.
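A minimal sketch of these two input modes with the OpenCV 1.x C API is given below; the file names and the FOURCC code are placeholders, and the optional video writer corresponds to the save option mentioned above.

```cpp
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(int argc, char** argv)
{
    // Input mode 1: capture from the first camera.
    // Input mode 2: read a local AVI file (path given on the command line).
    bool from_camera = (argc < 2);
    CvCapture* capture = from_camera
        ? cvCaptureFromCAM(0)
        : cvCaptureFromAVI(argv[1]);
    if (!capture)
        return -1;

    CvVideoWriter* writer = 0;            // optional: save frames locally
    IplImage* frame = cvQueryFrame(capture);
    if (frame && from_camera)
        writer = cvCreateVideoWriter("capture.avi",
                                     CV_FOURCC('M','J','P','G'),
                                     25, cvGetSize(frame), 1);

    while (frame)
    {
        if (writer)
            cvWriteFrame(writer, frame);  // the "Frame" save option
        // ... pass 'frame' to the panorama synthesis step here ...
        frame = cvQueryFrame(capture);    // NULL when the file ends
    }

    if (writer) cvReleaseVideoWriter(&writer);
    cvReleaseCapture(&capture);
    return 0;
}
```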

While this module is running, several states need to be judged in real time as each video frame is processed. Firstly, the system judges whether its state has been changed: the user may switch the system state during panorama composition, for example switching to the target tracking function without waiting for the video to finish. Secondly, the system judges whether the video has been read to the end; this check is designed mainly for the local-video-file input mode, whereas for the camera-capture input mode …

3.2 Module Realization

This module uses the C++ language as its programming foundation, combined with OpenCV. OpenCV is an open-source computer vision library consisting of a set of C functions and a small amount of C++ code [8]; it implements many common algorithms of image processing and computer vision.

The template matching algorithm used in the panorama composition process in this paper is a simplification of the spherical projection model and relies mainly on gray-scale image matching. In the matching algorithm based on gray projection [9], the two-dimensional gray values of an image are projected and transformed into two independent one-dimensional data sets, and matching is then performed on this one-dimensional data. As a result of this dimension reduction, the amount of computation in the matching process is greatly reduced.
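As an illustration of this idea, the sketch below projects each gray-scale frame onto its columns (summing each column) and then searches for the horizontal shift that minimizes the difference between the two one-dimensional projections. The function names, the cost measure and the search range are assumptions of this sketch rather than the paper's exact implementation.

```cpp
#include <opencv/cv.h>
#include <vector>
#include <cmath>
#include <limits>

// Column-wise gray projection: sum the pixel values of every column,
// turning a 2-D 8-bit gray image into a 1-D curve.
static std::vector<double> columnProjection(const IplImage* gray)
{
    std::vector<double> proj(gray->width, 0.0);
    for (int y = 0; y < gray->height; ++y)
    {
        const uchar* row = (const uchar*)(gray->imageData + y * gray->widthStep);
        for (int x = 0; x < gray->width; ++x)
            proj[x] += row[x];
    }
    return proj;
}

// Find the horizontal shift of 'curr' relative to 'prev' by sliding one
// projection over the other and minimizing the mean absolute difference.
// 'max_shift' bounds the search range (an assumption of this sketch).
int grayProjectionShift(const IplImage* prev, const IplImage* curr,
                        int max_shift = 100)
{
    std::vector<double> p = columnProjection(prev);
    std::vector<double> c = columnProjection(curr);

    int best_shift = 0;
    double best_cost = std::numeric_limits<double>::max();
    for (int s = 0; s <= max_shift && s < (int)p.size(); ++s)
    {
        double cost = 0.0;
        int n = (int)p.size() - s;
        for (int x = 0; x < n; ++x)
            cost += std::fabs(p[x + s] - c[x]);   // camera pans to the right
        cost /= n;
        if (cost < best_cost) { best_cost = cost; best_shift = s; }
    }
    return best_shift;   // columns by which 'curr' is shifted w.r.t. 'prev'
}
```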

The most common application of the RGB (red, green, blue) color space is the monitor system; a monitor can display color values given in the HSB, RGB, LAB and CMYK color spaces.

The HSV (hue, saturation, value) color space model corresponds to a cone-shaped subset of a cylindrical coordinate system; the top surface of the cone corresponds to V=1, that is, the V axis of the HSV model corresponds to the main diagonal of the RGB color space. The colors on the circumference of the cone's top surface, where V=1 and S=1, are pure colors. The HSV model corresponds to the way a painter mixes colors: the artist obtains different colors from a pure color by changing its concentration and depth, adding white to change the concentration and black to change the depth.

The HSI color space is derived from the human visual system; it describes color by hue (Hue), color saturation (Saturation or Chroma) and luminance (Intensity or Brightness). Working in the HSI color space can therefore greatly simplify the workload of image analysis and processing. HSI and RGB are different notations of the same physical quantity, so there is a conversion relationship between them.

3.2.1 CvtColor color space conversion function

Function prototype: void cvCvtColor( const CvArr* src, CvArr* dst, int code );

src: input image, 8-bit, 16-bit, or 32-bit single-precision floating point;

dst: output image, 8-bit, 16-bit, or 32-bit single-precision floating point;

code: the color space conversion, defined by the CV_<src>2<dst> constants:

a) RGB <=> XYZ (CV_BGR2XYZ, CV_RGB2XYZ, CV_XYZ2BGR, CV_XYZ2RGB);

b) RGB <=> YCrCb (CV_BGR2YCrCb, CV_RGB2YCrCb, CV_YCrCb2BGR, CV_YCrCb2RGB);

c) RGB => HSV (CV_BGR2HSV, CV_RGB2HSV);

d) RGB => Lab (CV_BGR2Lab, CV_RGB2Lab);

e) RGB => HLS (CV_BGR2HLS, CV_RGB2HLS);

f) Bayer => RGB (CV_BayerBG2BGR, CV_BayerGB2BGR, CV_BayerRG2BGR, CV_BayerGR2BGR, CV_BayerBG2RGB, CV_BayerGB2RGB, CV_BayerRG2RGB, CV_BayerGR2RGB).
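For example, converting a captured BGR frame to the HSV space used later for color-based tracking, and to a gray image for projection-based matching, might look like the following sketch (the function and image names are illustrative):

```cpp
#include <opencv/cv.h>
#include <opencv/highgui.h>

// Convert a BGR frame (as delivered by cvQueryFrame) into the spaces
// used by the two modules: 'hsv' must be an 8-bit, 3-channel image of
// the same size as 'frame', 'gray' an 8-bit, 1-channel image.
void convertFrame(const IplImage* frame, IplImage* hsv, IplImage* gray)
{
    cvCvtColor(frame, hsv,  CV_BGR2HSV);   // for color-based tracking
    cvCvtColor(frame, gray, CV_BGR2GRAY);  // for gray-projection matching
}
```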

3.2.2 MatchTemplate: comparing a template against overlapping regions of the image

Function prototype: void cvMatchTemplate( const CvArr* image, const CvArr* templ, CvArr* result, int method );

The function slides the template across the whole image and, with the specified method, compares the w x h template against each overlapping region of the image; the comparison results are stored in result.

image: the image to be searched; it should be a single-channel, 8-bit or 32-bit floating point image;

templ: the search template; it cannot be larger than the input image and must have the same data type;

result: the map of comparison results, single-channel, 32-bit floating point. If the image is W x H and the template is w x h, then result is (W-w+1) x (H-h+1);

method: specifies the matching method, as follows.

a) method = CV_TM_SQDIFF:
R(x,y) = \sum_{x',y'} \left[ T(x',y') - I(x+x',y+y') \right]^2

b) method = CV_TM_SQDIFF_NORMED:
R(x,y) = \frac{\sum_{x',y'} \left[ T(x',y') - I(x+x',y+y') \right]^2}{\sqrt{\sum_{x',y'} T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}

c) method = CV_TM_CCORR:
R(x,y) = \sum_{x',y'} T(x',y') \cdot I(x+x',y+y')

d) method = CV_TM_CCORR_NORMED:
R(x,y) = \frac{\sum_{x',y'} T(x',y') \cdot I(x+x',y+y')}{\sqrt{\sum_{x',y'} T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}

e) method = CV_TM_CCOEFF:
R(x,y) = \sum_{x',y'} T'(x',y') \cdot I'(x+x',y+y')

in which
T'(x',y') = T(x',y') - \frac{1}{w h} \sum_{x'',y''} T(x'',y'')   (the template brightness is centered to 0)
I'(x+x',y+y') = I(x+x',y+y') - \frac{1}{w h} \sum_{x'',y''} I(x+x'',y+y'')   (the patch brightness is centered to 0)

f) method = CV_TM_CCOEFF_NORMED:
R(x,y) = \frac{\sum_{x',y'} T'(x',y') \cdot I'(x+x',y+y')}{\sqrt{\sum_{x',y'} T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2}}

After the function completes the comparison, cvMinMaxLoc is used to find the global minimum (for CV_TM_SQDIFF*) or the global maximum (for CV_TM_CCORR* and CV_TM_CCOEFF*) of the result map.
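A brief usage sketch of this pair of calls, locating the best match of a template inside a frame, is shown below; the CV_TM_CCOEFF_NORMED choice and the variable names are assumptions of the sketch.

```cpp
#include <opencv/cv.h>
#include <opencv/highgui.h>

// Locate 'templ' inside 'image' and return the top-left corner of the
// best match. Both images are assumed to be 8-bit single channel.
CvPoint locateTemplate(const IplImage* image, const IplImage* templ)
{
    // Result map has size (W-w+1) x (H-h+1), 32-bit float, 1 channel.
    CvSize res_size = cvSize(image->width  - templ->width  + 1,
                             image->height - templ->height + 1);
    IplImage* result = cvCreateImage(res_size, IPL_DEPTH_32F, 1);

    cvMatchTemplate(image, templ, result, CV_TM_CCOEFF_NORMED);

    // For the CCORR/CCOEFF families the best match is the global maximum.
    double min_val, max_val;
    CvPoint min_loc, max_loc;
    cvMinMaxLoc(result, &min_val, &max_val, &min_loc, &max_loc);

    cvReleaseImage(&result);
    return max_loc;
}
```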

The interaction part of the module was developed in the Java language: the interface structure was built with the Java Swing package, and a JNI interface is used to communicate and exchange data with the underlying layer. Fig. 3 shows the control interface for the input and output parameters that the system can obtain. "Input Video" selects whether the video is obtained from a camera or from a local AVI file; "Frame" and "Panorama" select whether they need to be saved to the local computer, that is, whether the video captured from the camera and the synthesized panorama are written to local files.

Fig. 3 I/O control interface

4. Design of target tracking

Target tracking had already been achieved in the RIM scheme; what is added here is the suspicious-target selection function. Based on his observation, the operator determines the suspicious target and selects it. After the target is selected it is surrounded by a red oval frame; the operator then switches the control state to the tracking state, so that the suspicious target, whether static or moving, can be tracked. The algorithm flow is shown in Figure 4.

Fig. 4 Target tracking algorithm flow chart

The module needs a mouse operation interface, and the target parameters need to be judged for every frame. If the user selects a target again, the module replaces the original tracking parameters. The main processing concentrates on judging the target's location in the current frame and marking it with the oval frame. Because the camera captures frames very quickly while the target moves randomly and unpredictably, the system is required to have good real-time performance.
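The mouse operation interface described above can be realized with an OpenCV mouse callback; the sketch below records the rectangle dragged by the operator and hands it over as the new target, replacing any previous selection. The window name, the global selection variables and the hand-off flag are assumptions of this sketch.

```cpp
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <cstdlib>

// Rectangle dragged by the operator; a new drag replaces the old target.
static CvRect  g_selection  = cvRect(0, 0, 0, 0);
static CvPoint g_origin;
static bool    g_selecting  = false;
static bool    g_target_set = false;

// Mouse callback: pressing the left button starts the selection, dragging
// grows the rectangle, releasing the button confirms the new target.
static void onMouse(int event, int x, int y, int /*flags*/, void* /*param*/)
{
    if (g_selecting)
    {
        g_selection.x      = MIN(x, g_origin.x);
        g_selection.y      = MIN(y, g_origin.y);
        g_selection.width  = abs(x - g_origin.x);
        g_selection.height = abs(y - g_origin.y);
    }
    if (event == CV_EVENT_LBUTTONDOWN)
    {
        g_origin     = cvPoint(x, y);
        g_selecting  = true;
        g_target_set = false;
    }
    else if (event == CV_EVENT_LBUTTONUP)
    {
        g_selecting = false;
        if (g_selection.width > 0 && g_selection.height > 0)
            g_target_set = true;   // tracker re-initializes from g_selection
    }
}

// Registration, done once after the display window is created:
//   cvNamedWindow("RIM view");
//   cvSetMouseCallback("RIM view", onMouse, 0);
```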

5. Experiments

5.1 Panorama synthesis experiment

Fig. 5 shows the beginning of the image frame sequence obtained from a (compressed) AVI file.

Fig. 5 AVI part video frame sequence

The result of the panorama synthesis operation on the above image sequence is shown in Figure 6:

Fig. 6 Panorama synthesis results

Through many repeated tests of this kind, the panorama synthesis module basically meets the users' requirements and achieves a good synthesis effect in terms of both real-time performance and accuracy, although there is some delay when displaying the results. When the video rotates from left to right, displaying is basically problem-free, because each refresh only has to load the newly added images; however, when the rotation is from right to left, the part of the panorama that has already been stitched has to be updated each time, so the display delay is more noticeable.

5.2 Target tracking experiment

The experiment was carried out with a USB camera.

Without any object selected with the mouse, the view is as shown in Figure 7.

Fig. 7 Before selecting the target

Figure 8 shows the view after selecting the target object (a poster) with the mouse; the object is ringed with an oval frame.

Fig. 8 After selecting the target

Now the camera angle is moved for the target tracking experiment. The real purpose is automatic tracking of a moving target by the camera; since a USB camera is used here, the camera itself is moved to produce the relative motion. Figures 9 and 10 show the camera shifted to the left and to the right, respectively.

Fig. 9 Camera rotation to the left

Fig. 10 Camera rotation to the right

In these figures, the red oval frame marks the target being tracked (the poster). While the camera moves, the system always keeps the oval ring on the target, which achieves the purpose of tracking. Because the CamShift algorithm used here realizes tracking by judging only the target's color information, the tracking precision is not very high; it is accurate mainly when the color difference is large, as in the experiment above. If the environment is very complex and the color information is very complicated, the tracking effect degrades.
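A condensed sketch of this color-based tracking loop with the OpenCV 1.x C API is shown below, in the spirit of OpenCV's camshiftdemo sample: the hue histogram of the selected region is computed once, each new frame is back-projected through that histogram, and cvCamShift updates the oval (CvBox2D) that is drawn on the frame. The histogram size, termination criteria and variable names are assumptions of this sketch, not the paper's exact parameters.

```cpp
#include <opencv/cv.h>
#include <opencv/highgui.h>

// One tracking iteration: 'hsv' is the current frame converted with
// cvCvtColor(frame, hsv, CV_BGR2HSV), 'hue' and 'backproject' are
// preallocated 8-bit single-channel images, 'hist' was filled from the
// operator's selection, and 'track_window' holds the current search
// window (initially the selected rectangle).
CvBox2D trackStep(IplImage* frame, IplImage* hsv, IplImage* hue,
                  IplImage* backproject, CvHistogram* hist,
                  CvRect* track_window)
{
    cvSplit(hsv, hue, 0, 0, 0);                  // keep only the hue plane

    // Probability image: how well each pixel's hue matches the target.
    cvCalcBackProject(&hue, backproject, hist);

    CvConnectedComp track_comp;
    CvBox2D track_box;
    cvCamShift(backproject, *track_window,
               cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 10, 1),
               &track_comp, &track_box);
    *track_window = track_comp.rect;             // window for the next frame

    // Ring the tracked target with the red oval frame.
    cvEllipseBox(frame, track_box, CV_RGB(255, 0, 0), 3, CV_AA, 0);
    return track_box;
}

// The histogram is built once from the selected region, e.g.:
//   int   hist_size = 16;
//   float hranges[] = { 0, 180 };
//   float* ranges   = hranges;
//   CvHistogram* hist = cvCreateHist(1, &hist_size, CV_HIST_ARRAY, &ranges, 1);
//   cvSetImageROI(hue, selection); cvCalcHist(&hue, hist, 0, 0);
//   cvResetImageROI(hue);
```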

The test results of the above two experiments were analysed respectively. The experimental results show that the panorama composition and suspicious-target selection functions were added successfully to the robot vision part, which makes the application more targeted. Although the system still needs to be improved and made more practical, the experiments show that it can satisfy general demands in situations that are not very complex. Research on the system is still ongoing.

References

1. David A. Forsyth, Jean Ponce. Computer Vision: A Modern Approach [M]. Trans. Lin Xueyin, Wang Hong, et al. Beijing: Publishing House of Electronics Industry, 2004.

2. Xie Kai, Guo Heng, Zhang Tianwen. Image mosaics technology [J]. Journal of Electronics, 2004, 32(4): 630-634.

3. Szeliski R. Image mosaicing for telereality applications [J]. IEEE Computer Graphics and Applications, 1994, (6): 44-53.

4. Li Yanli, Xiang Hui. Solid spherical panorama generation algorithm [J]. Journal of Computer-Aided Design and Computer Graphics, 2007, 19(11): 1383-1398.

5. Peleg S, Herman J. Panoramic mosaics by manifold projection [J]. Proceedings of the IEEE Computer Society Conference on CVPR, 1997: 338-343.

6. Wang Wenming. Application oriented robot machine remote control scheme design [OL]. [2012-05-10]. http://www.paper.edu.cn/index.php/default/releasepaper/content/201205-166/

7. Richard O. Duda, Peter E. Hart, David G. Stork. Pattern Classification [M]. Trans. Li Hongdong, et al. Beijing: Mechanical Industry Press, 2003.

8. Wu Qiqing. Eclipse program design classic tutorial [M]. Beijing: Metallurgical Industry Press, 2007.

9. Ping Jie, Yin Runmin. A panorama image and its implementation [J]. Microcomputer Applications, 2007, 33(6): 59-62.
