
Chinese translation: approx. 2,880 characters

Graduation Project (Thesis) Foreign Literature Translation

Foreign title: Matching between 3D Coordinate and its Color Information in 3D Color Sensor
Chinese title: 三維坐標和顏色信息匹配的3D顏色傳感器 (Matching between 3D Coordinates and Color Information in a 3D Color Sensor)
School: School of Electronic and Information Engineering
Major: Electronic Information Engineering
Class: Telecommunications Class 114
Student ID: 11401180419
Advisor: (not given)
Enterprise advisor: (not given)
Final draft date: December 30, 2014

Matching between 3D Coordinates and Color Information in a 3D Color Sensor (Translation)

Abstract: The matching between the three-dimensional (3D) coordinates and the color information measured by a 3D color sensor can be calibrated by applying the linear partition method to the 3D sensor and the BP neural network method to the color sensor. The calibration process, including the formula derivation, the solution procedure and the information matching method, is discussed in detail. The calibration experiments show that the mean relative measuring error of the 3D sensor calibrated by the linear partition method is 0.26%, while the testing accuracy of the color sensor calibrated by the BP neural network reaches 0.5-0.6 pixels. Based on these calibration results, a real object was measured, and the 3D color point cloud obtained represents the real object truly and vividly.

Keywords: 3D color sensor; camera calibration; information matching; linear partition calibration; BP neural network; color point cloud

1. Introduction

Acquiring the 3D coordinates and color information of a real object is a research focus in the digitization field. Up to now, various techniques based on different principles have been proposed [1] and widely applied in many fields, such as CAD and CAM, reverse engineering, rapid prototyping, virtual reality, human engineering and the preservation of cultural relics [2, 3]. Among these techniques, non-contact optical methods, especially the structured light method, have become more and more popular because of their simple principle, fast measurement, freedom from contact and high accuracy.

A real object can be digitized together with its color information by a 3D color sensor whose key parts are black-and-white (B&W) cameras and color cameras, and the matching between the 3D coordinates and the color information digitized by the 3D color sensor can be realized by camera calibration. For this purpose, many calibration techniques have been proposed, such as the direct linear transformation method, the full-scale nonlinear optimization method and the two-stage method [4, 5]. In most of these methods, however, a complicated camera model must be set up and many intrinsic and extrinsic camera parameters must be computed, which may lead to an unstable solution process. In fact, in many applications the mapping between the space coordinates of points and their pixel coordinates is already sufficient, and many of the intrinsic and extrinsic parameters are often redundant. For these reasons, this paper calibrates the matching between the 3D coordinates and the color information by applying the linear partition method to the 3D sensor and the BP neural network method to the color sensor.

2. Principle of Calibration and Information Matching

2.1 Linear partition calibration and its solution

The mapping relationship between the space coordinates (Xw, Yw, Zw) of object points and their corresponding pixel coordinates (Xf, Yf), obtained from the image capture process, can be formulated with homogeneous coordinates as the matrix equation

$$\rho\begin{bmatrix}X_f\\ Y_f\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}&m_{12}&m_{13}&m_{14}\\ m_{21}&m_{22}&m_{23}&m_{24}\\ m_{31}&m_{32}&m_{33}&m_{34}\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}=M\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}\tag{1}$$

where ρ is a scale factor. Obviously, the matrix M in equation (1) contains all of the mapping information. If the number of calibration points is sufficient, M can be determined by solving the linear system of equations created from the space coordinates of the calibration points and their corresponding pixel coordinates. Equation (1) can be expanded as

$$\begin{cases}m_{11}X_w+m_{12}Y_w+m_{13}Z_w+m_{14}-m_{31}X_fX_w-m_{32}X_fY_w-m_{33}X_fZ_w=m_{34}X_f\\ m_{21}X_w+m_{22}Y_w+m_{23}Z_w+m_{24}-m_{31}Y_fX_w-m_{32}Y_fY_w-m_{33}Y_fZ_w=m_{34}Y_f\end{cases}\tag{2}$$

Theoretically, the parameters m11 to m34 can be determined from 6 points. In practice, however, m34 is treated as one, and dozens of calibration points are introduced to reduce the error by solving overdetermined equations. When the number of points is N, 2N equations of the form (3) can be built, and the matrix M is obtained by a least-squares procedure:

$$Ax=B,\qquad A=\begin{bmatrix}\cdots&&&&&&&&&&\\ X_{wi}&Y_{wi}&Z_{wi}&1&0&0&0&0&-X_{fi}X_{wi}&-X_{fi}Y_{wi}&-X_{fi}Z_{wi}\\ 0&0&0&0&X_{wi}&Y_{wi}&Z_{wi}&1&-Y_{fi}X_{wi}&-Y_{fi}Y_{wi}&-Y_{fi}Z_{wi}\\ \cdots&&&&&&&&&&\end{bmatrix}\tag{3}$$

$$x=\begin{bmatrix}m_{11}&m_{12}&m_{13}&m_{14}&m_{21}&m_{22}&m_{23}&m_{24}&m_{31}&m_{32}&m_{33}\end{bmatrix}^T,\qquad B=\begin{bmatrix}\cdots&X_{fi}&Y_{fi}&\cdots\end{bmatrix}^T$$

A calibration based simply on the above method will, however, cause considerable error, because it does not take lens distortion and other nonlinear factors into account. Another method is therefore proposed: the whole image is divided into several parts, which means that the data pairs (space point coordinates and their corresponding pixel coordinates) are also divided into several sets, and the linear method above is applied separately to each set of data pairs, i.e., to each image region. Several transformation matrices are obtained in this way, and during measurement the appropriate one is selected for the input data according to the partition-based classification rules. This is the basic concept of the linear partition method, with which the measuring error can be reduced significantly.

2.2 BP neural network calibration technique

A BP neural network is a one-way-transmission, multi-layer artificial neural network. Every layer contains one or more nodes, and the output of each layer is connected only to the input of the next layer, with no connection to the nodes of any other layer or to itself. A standard network is composed of one input layer, one or more hidden layers and one output layer.

For the input nodes, the outputs equal the inputs. The behavior of the hidden layers and the output layer can be described by

$$\mathrm{net}_{pj}=\sum_i w_{ji}\,o_{pi},\qquad o_{pj}=f_j(\mathrm{net}_{pj})\tag{4}$$

where p is the present input sample, w_ji is the connection weight from node i to node j, and o_pi and o_pj are the input and output of node j. f_j is the excitation function, which should be differentiable everywhere in a BP neural network, so S-shaped functions such as the sigmoid are always preferred:

$$f(x)=\frac{1}{1+e^{-x}}\tag{5}$$

The training of the network begins with the preparation of the training samples, which contain the input samples and the ideal output samples. The process has two directions, forward and backward. In the forward direction, the behavior of each layer influences only the next layer; if the output is not ideal, the process turns to the backward direction, in which the error signal is returned along the connection paths and transmitted back toward the input layer while the weights are modified. This process is repeated until the error meets the requirement. In the backward direction, the weights are adjusted by

$$\Delta_p\omega_{ji}=\eta\,\delta_{pj}\,o_{pi}\tag{6}$$

where η is the learning rate, whose value should be selected between zero and one. When the node is in the output layer, definition (7) is used; otherwise equation (8) is used:

$$\delta_{pj}=(t_{pj}-o_{pj})\,f'_j(\mathrm{net}_{pj})\tag{7}$$

$$\delta_{pj}=f'_j(\mathrm{net}_{pj})\sum_k\delta_{pk}\,\omega_{kj}\tag{8}$$

where t_pj is the ideal output and o_pj is the real output. The color sensor is calibrated by the BP neural network described above; the inputs and outputs are chosen as the 3D coordinates of the calibration points and their corresponding 2D pixel coordinates, as shown in Fig. 1.

2.3 Information matching process

As Fig. 2 shows, the 3D sensor is composed of a B&W CCD and a line-structured laser, and it can move up and down; the color CCD serves as the color sensor. The light plane intersects the object and generates a light stripe. The measuring process is one of scanning the light stripe over the object while the contour information is recorded by the 3D sensor. The horizontal and depth coordinates are recorded by the 3D sensor, the vertical coordinate is obtained from a precise mechanical scanning system, and the color sensor records the color information of the object.

The matching between the 3D coordinates and the color information is realized by the following process. The calibration of the 3D sensor yields the transformation matrices of the different partitions, which translate the light stripe information on the B&W CCD into the space coordinates (Xw, Yw). With the third coordinate Zw obtained from the scanning system, the whole 3D information of the object can be expressed as (Xw, Yw, Zw). Furthermore, the mapping relationship between the 3D coordinates (Xw, Yw, Zw) and the corresponding pixel coordinates in the color image captured by the color CCD is obtained through the color sensor calibration, so the RGB values can be read from the real color image. The method described above thus realizes the matching between the 3D information and its corresponding color information.

3. Experimental Results

Based on the above theory, coplanar and non-coplanar calibration points were used to calibrate the B&W CCD and the color CCD respectively; the cameras are an MTV-0360 and a 73X11HP made by MINTRON. The B&W CCD, with a 6 mm lens, is sampled by an image capture board at a resolution of 640×480, and the color CCD, with an 8 mm lens, is sampled at a resolution of 768×576. The laser is an SNF-501L670 from LASIRIS (Canada); the light is line-structured and the wavelength is 670 nm.

The B&W image was divided in a 1×4 form over the area where most of the points are distributed, and four transformation matrices were obtained. To verify the calibration precision, an object with an edge length of 149.50 mm was measured according to the calibration results. Four images were captured, and the mean relative error after computation is 0.26%.

A double-layer BP neural network (not counting the input layer) with six nodes in the hidden layer was adopted to calibrate the color sensor. The numbers of calibration points and testing points are 60 and 48 respectively. The mean absolute error is 0.61 pixels in the x direction and 0.55 pixels in the y direction.

A real 3D color object, made by pasting yellow, red and green paper on a cylinder, was measured based on the calibration results of the B&W and color CCDs, and 30 B&W images were captured by a vertical scanning system. Fig. 3 is the color image of the object; Fig. 4 is the 3D point cloud calculated from the transformation matrices and the image series; Fig. 5 is the 3D color point cloud obtained from the color sensor calibration and the information matching process. It represents the 3D and color information of the object truly and vividly.

4. Conclusion

The theoretical analysis and the experimental results above indicate that the matching technique based on camera calibration is feasible and gives satisfying precision and results. Neither method needs the camera's intrinsic and extrinsic parameters, such as the scale factor or the image center, to be set in advance; only the space point coordinates and their corresponding pixel coordinates are needed, though a sufficient number of sample points is still required to guarantee high precision. In addition, when the linear partition method is used, the number and form of the partitions should be determined by the practical application, and the image can also be divided in other ways, such as into concentric circles or rectangles; within each partition, linear or even nonlinear calibration can be used. The method therefore solves the information matching problem effectively and lays a good foundation for future 3D color reconstruction and texture mapping.

References

[1] Sun Yuchen, Ge Baozhen, Zhang Yimo. Review for the 3D information measuring technology[J]. Journal of Optoelectronics·Laser, 2004, 15(2): 248-254. (in Chinese)
[2] Petrov M., Talapov A., et al. Optical 3D digitizers: bringing life to the virtual world[J]. IEEE Computer Graphics and Applications, 1998, 18(3): 28-37.
[3] Borghese N.A., Ferrigno G., et al. Autoscan: a flexible and portable 3D scanner[J]. IEEE Computer Graphics and Applications, 1998, 18(3): 38-41.
[4] Tsai R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses[J]. IEEE Journal of Robotics and Automation, 1987, RA-3(4): 323-344.
[5] Faig W. Calibration of close-range photogrammetry systems: Mathematical formulation[J]. Photogrammetric Engineering and Remote Sensing, 1975, 41: 1479-1486.

Matching between 3D Coordinate and its Color Information in 3D Color Sensor

GE Bao-zhen, SUN Yu-chen, MU Bing, SUN Ming-rui, LV Qie-ni
College of Precision Instrument and Opto-electronics Engineering, Tianjin University, Tianjin 300072, China
Key Laboratory of Opto-electronics Information and Technical Science (Tianjin University), Ministry of Education, Tianjin 300072, China

ABSTRACT

The matching between the three dimensional (3D) coordinate and its color information in a 3D color sensor is realized by a calibration technique that applies the Linear Partition Method to the 3D sensor and the BP Neural Network Method to the color sensor. The principle of the calibration technique, including the formulas and the solution procedure, is deduced, and the procedure of the information matching is discussed in detail. Calibration experiment results indicate that the Linear Partition Method enables the 3D sensor to reach a mean relative measuring error of 0.26 percent, and the BP Neural Network Method enables the color sensor to reach a testing accuracy of 0.5-0.6 pixels. Based on the calibration results, a real object is measured, and the 3D color point cloud obtained represents the object truly and vividly.

Keywords: 3D color sensor; camera calibration; information matching; linear partition calibration; BP neural network; color point cloud

1. INTRODUCTION

Acquisition of the 3D and color information of a real object is a research focus in the digitization field. Up to now, different kinds of techniques based on different principles have been proposed [1], which are widely applied in many fields such as CAD and CAM, reverse engineering, rapid prototyping, virtual reality, human engineering and the preservation of cultural relics [2, 3]. Among these techniques, non-contact optical methods, especially the structured light method, have become more and more popular due to their simple principle, fast measurement, freedom from contact and high accuracy.

A real object can be digitized with its color information by a 3D color sensor whose key parts are black and white (B&W) cameras and color cameras. The matching between the 3D coordinate and its color information in the 3D color sensor can be realized by camera calibration. Many calibration techniques have been proposed, such as the Direct Linear Transformation Method, the Full-Scale Nonlinear Optimization Method and the Two Stage Method [4, 5]. In most of these methods, a complicated camera model always needs to be set up and many intrinsic and extrinsic camera parameters need to be computed, which may cause an unstable solution procedure. In fact, in many applications the mapping between the space coordinates of points and their pixel coordinates is already enough, and many of the intrinsic and extrinsic parameters are redundant. For these reasons, the Linear Partition Method and the BP Neural Network Method are applied to the 3D sensor and the color sensor respectively.

2. PRINCIPLE OF CALIBRATION AND INFORMATION MATCHING

2.1 Linear partition method and its solution procedure

The mapping relationship between an object's space coordinates (Xw, Yw, Zw) and the corresponding pixel coordinates (Xf, Yf), obtained from the image capture process, can be formulated in matrix form with homogeneous coordinates as the following equation:

$$\rho\begin{bmatrix}X_f\\ Y_f\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}&m_{12}&m_{13}&m_{14}\\ m_{21}&m_{22}&m_{23}&m_{24}\\ m_{31}&m_{32}&m_{33}&m_{34}\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}=M\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}\tag{1}$$

where ρ is a scale factor. Apparently, the matrix M contains all of the mapping information, and if the number of calibration points is sufficient, M can be determined by solving a linear system of equations created from the calibration points' 3D coordinates and their corresponding image coordinates. Equation (1) can be expanded as the following equation:

$$\begin{cases}m_{11}X_w+m_{12}Y_w+m_{13}Z_w+m_{14}-m_{31}X_fX_w-m_{32}X_fY_w-m_{33}X_fZ_w=m_{34}X_f\\ m_{21}X_w+m_{22}Y_w+m_{23}Z_w+m_{24}-m_{31}Y_fX_w-m_{32}Y_fY_w-m_{33}Y_fZ_w=m_{34}Y_f\end{cases}\tag{2}$$

Theoretically, the parameters from m11 to m34 can be determined by 6 points. In practical application, however, m34 is always treated as one, and dozens of calibration points are introduced to reduce the error by solving overdetermined equations. So when the number of points is N, 2N equations can be obtained, expressed as follows, and the matrix M can be got from a least-squares procedure:

$$Ax=B,\qquad A=\begin{bmatrix}\cdots&&&&&&&&&&\\ X_{wi}&Y_{wi}&Z_{wi}&1&0&0&0&0&-X_{fi}X_{wi}&-X_{fi}Y_{wi}&-X_{fi}Z_{wi}\\ 0&0&0&0&X_{wi}&Y_{wi}&Z_{wi}&1&-Y_{fi}X_{wi}&-Y_{fi}Y_{wi}&-Y_{fi}Z_{wi}\\ \cdots&&&&&&&&&&\end{bmatrix}\tag{3}$$

$$x=\begin{bmatrix}m_{11}&m_{12}&m_{13}&m_{14}&m_{21}&m_{22}&m_{23}&m_{24}&m_{31}&m_{32}&m_{33}\end{bmatrix}^T,\qquad B=\begin{bmatrix}\cdots&X_{fi}&Y_{fi}&\cdots\end{bmatrix}^T$$
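As a concrete illustration, the following NumPy sketch builds the 2N×11 system of equation (3) and solves it by least squares with m34 fixed to one, as described above. It is a minimal sketch under stated assumptions, not the authors' code; the function name and the use of `numpy.linalg.lstsq` are our own choices.

```python
import numpy as np

def calibrate_projection_matrix(world_pts, pixel_pts):
    """Solve equation (3) for the 11 unknown entries of M (m34 fixed to 1).

    world_pts : (N, 3) array of calibration points (Xw, Yw, Zw)
    pixel_pts : (N, 2) array of corresponding pixel coordinates (Xf, Yf)
    Returns the 3x4 projection matrix M of equation (1).
    """
    world_pts = np.asarray(world_pts, dtype=float)
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    n = world_pts.shape[0]
    A = np.zeros((2 * n, 11))
    B = np.zeros(2 * n)
    for i, ((xw, yw, zw), (xf, yf)) in enumerate(zip(world_pts, pixel_pts)):
        # Row for the Xf equation in (2)/(3).
        A[2 * i] = [xw, yw, zw, 1, 0, 0, 0, 0, -xf * xw, -xf * yw, -xf * zw]
        B[2 * i] = xf
        # Row for the Yf equation.
        A[2 * i + 1] = [0, 0, 0, 0, xw, yw, zw, 1, -yf * xw, -yf * yw, -yf * zw]
        B[2 * i + 1] = yf
    # Overdetermined system: least-squares solution for x = (m11 ... m33).
    x, *_ = np.linalg.lstsq(A, B, rcond=None)
    return np.append(x, 1.0).reshape(3, 4)  # append m34 = 1
```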

The discussion above does not take the distortion of the lens and other nonlinear factors into consideration, and a calibration technique based simply on this method will cause much error. So another method is proposed, which divides the whole image into several parts; this also means that the data pairs (space points' coordinates and their corresponding pixels' coordinates) are divided into several sets, and the linear method mentioned above is applied to each set of data pairs, i.e., to each image region, separately. Several transformation matrices are obtained in this way, and during measurement the appropriate one is applied to the input data according to the partition-based classification rules. This is the basic concept of the linear partition method, with which the measuring error can be reduced significantly.
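Building on the sketch above, the per-partition fitting might look as follows. The paper does not specify the partition rule beyond the 1×4 division used in the experiments, so the vertical-band rule here is an assumed illustration; `calibrate_projection_matrix()` is the helper defined earlier.

```python
import numpy as np

def fit_partition_matrices(world_pts, pixel_pts, n_parts=4, width=640):
    """Fit one projection matrix per image band (assuming the 1x4 division
    of the experiments splits the 640-pixel-wide B&W image into four bands).
    At measurement time the matrix is selected by applying the same banding
    rule to the incoming pixel coordinate."""
    world_pts = np.asarray(world_pts, dtype=float)
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    band = np.clip((pixel_pts[:, 0] * n_parts / width).astype(int), 0, n_parts - 1)
    # One least-squares fit per band, reusing the earlier sketch.
    return [calibrate_projection_matrix(world_pts[band == b], pixel_pts[band == b])
            for b in range(n_parts)]
```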

2.2 BP neural network calibration technique

The BP Neural Network is a one-way-transmission, multi-layer artificial network. Every layer contains one or more nodes; the output of each layer is connected only with the input of the next layer and has no relationship with the nodes of other layers or with itself. A standard network is composed of one input layer, one or more hidden layers and one output layer.

For input nodes, the output is just the same as the input; for the hidden layers and the output layer, the behavior can be described as follows:

$$\mathrm{net}_{pj}=\sum_i w_{ji}\,o_{pi},\qquad o_{pj}=f_j(\mathrm{net}_{pj})\tag{4}$$

where p is the present input sample, w_ji is the connection weight from node i to node j, and o_pi and o_pj are the input and output of node j. f_j is the excitation function, which should be differentiable everywhere in a BP Neural Network, so S-shaped functions are always preferred, such as the following:

$$f(x)=\frac{1}{1+e^{-x}}\tag{5}$$

The training process of the network begins with the preparation of the training samples, which contain the input samples and the ideal output samples. Two directions, defined as forward and backward, exist in the process. In the forward direction, the behavior of each layer only influences the next layer; if the output is not ideal, the process turns into the backward direction, which returns the error signal along the connection paths and transmits it back to the input layer while modifying the weights. This process is repeated until the error meets the demand. In the backward direction, the weights are adjusted by this formula:

$$\Delta_p\omega_{ji}=\eta\,\delta_{pj}\,o_{pi}\tag{6}$$

where η is the learning rate, whose value should be selected between zero and one. When the nodes are in the output layer, the following definition is used; otherwise equation (8) is used:

$$\delta_{pj}=(t_{pj}-o_{pj})\,f'_j(\mathrm{net}_{pj})\tag{7}$$

$$\delta_{pj}=f'_j(\mathrm{net}_{pj})\sum_k\delta_{pk}\,\omega_{kj}\tag{8}$$

where t_pj is the ideal output and o_pj is the real output.

The color sensor is calibrated by the BP Neural Network mentioned above, and the input and output are selected as the 3D coordinates of the calibration points and their corresponding 2D pixel coordinates, as Fig. 1 shows.
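A compact implementation of equations (4)-(8) for the network of Fig. 1 (three inputs, one hidden layer with six nodes as in the experiments, two outputs) might look like the sketch below. The initialization, the vectorized form and the normalization note are our assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """Equation (5): f(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

# 3 inputs (Xw, Yw, Zw) -> 6 hidden nodes -> 2 outputs (Xf, Yf), as in Fig. 1.
# In practice the coordinates would be scaled into (0, 1) to match the
# sigmoid's range; that normalization is assumed, not shown.
W1 = rng.normal(scale=0.1, size=(6, 3))  # w_ji, input -> hidden
W2 = rng.normal(scale=0.1, size=(2, 6))  # w_kj, hidden -> output

def train_step(p, t, eta=0.5):
    """One forward/backward pass for sample p with ideal output t."""
    global W1, W2
    # Forward direction, equation (4): net_pj = sum_i w_ji*o_pi, o_pj = f(net_pj).
    o_hid = sigmoid(W1 @ np.asarray(p, dtype=float))
    o_out = sigmoid(W2 @ o_hid)
    # Output layer, equation (7); for the sigmoid, f'(net) = o*(1 - o).
    d_out = (t - o_out) * o_out * (1.0 - o_out)
    # Hidden layer, equation (8): delta_pj = f'(net_pj) * sum_k delta_pk*w_kj.
    d_hid = o_hid * (1.0 - o_hid) * (W2.T @ d_out)
    # Weight update, equation (6): delta_p w_ji = eta * delta_pj * o_pi.
    W2 += eta * np.outer(d_out, o_hid)
    W1 += eta * np.outer(d_hid, np.asarray(p, dtype=float))
    return o_out
```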

[Fig. 1: BP Neural Network model adopted by the color sensor — input layer (Xw, Yw, Zw), hidden layers, output layer (Xf, Yf)]

[Fig. 2: Schematic diagram of the 3D color sensor — laser, B&W CCD, color CCD and the measured object]

2.3 Information matching process

As Fig. 2 shows, the 3D sensor is composed of a B&W CCD and a line-structured laser and moves up and down; the color CCD is the color sensor. The light plane intersects the object and generates a light stripe. The measuring process is the process of scanning the light stripe over the object while the contour information is recorded by the 3D sensor. The horizontal and depth coordinates are recorded by the 3D sensor, the vertical coordinate is obtained from a precise mechanical scanning system, and the color sensor records the color information of the object.

The matching between the 3D coordinate and its color information can be realized by the following process. The transformation matrices of the different partitions are obtained from the calibration of the 3D sensor, and they translate the light stripe information on the B&W CCD into the space coordinates (Xw, Yw). With the third coordinate Zw obtained from the scanning system, the whole 3D information of the object can be expressed by (Xw, Yw, Zw). Furthermore, the mapping relationship between the 3D coordinate (Xw, Yw, Zw) and its corresponding pixel coordinates in the color image captured by the color CCD is obtained through the color sensor calibration, so the RGB values can be read from the real color image. The method described above therefore realizes the matching between the 3D information and its corresponding color information.
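The matching step itself reduces to projecting each measured 3D point through the calibrated color-camera mapping and sampling the color image there. A minimal sketch, assuming a trained forward mapping such as the network above and points that project inside the image:

```python
import numpy as np

def match_color(points_3d, color_image, bp_forward):
    """Attach an RGB value to every 3D point (section 2.3).

    points_3d  : (N, 3) array of (Xw, Yw, Zw) from the 3D sensor + scanner
    color_image: (H, W, 3) array from the color CCD
    bp_forward : calibrated mapping (Xw, Yw, Zw) -> (Xf, Yf), e.g. the
                 trained network above run forward only
    Returns an (N, 6) color point cloud of (Xw, Yw, Zw, R, G, B).
    """
    h, w, _ = color_image.shape
    cloud = []
    for p in points_3d:
        xf, yf = bp_forward(p)                   # color-sensor calibration
        col = int(np.clip(round(xf), 0, w - 1))  # guard against edge overflow
        row = int(np.clip(round(yf), 0, h - 1))
        cloud.append([*p, *color_image[row, col]])
    return np.asarray(cloud)
```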

3. EXPERIMENTAL RESULTS

Based on the above theory, coplanar and non-coplanar calibration points are used to calibrate the B&W CCD and the color CCD respectively; the cameras are an MTV-0360 and a 73X11HP made by the MINTRON Company. The B&W CCD with a 6 mm lens is sampled by an image capture board at a resolution of 640×480, and the color CCD with an 8 mm lens is sampled at a resolution of 768×576. The laser is an SNF-501L670 from the LASIRIS Company in Canada; the light is line-structured and the wavelength is 670 nm.

The B&W image is divided in a 1×4 form over the area where most of the points are distributed, and four transformation matrices are obtained. In order to verify the calibration precision, an object with an edge length of 149.50 mm is measured according to the calibration results. Four images are captured, and the mean relative error after computation is 0.26%.

A double-layer BP Neural Network (not counting the input layer) with six nodes in the hidden layer is adopted to calibrate the color sensor. The numbers of calibration points and testing points are 60 and 48 respectively. The mean absolute error is 0.61 pixels in the x direction and 0.55 pixels in the y direction.

A real 3D color object made by pasting yellow, red and green paper on a cylinder is measured based on the calibration results of the B&W and color CCDs, and 30 B&W images are captured by a vertical scanning system. Fig. 3 is the color image of the object; Fig. 4 is the 3D point cloud calculated from the transformation matrices and the image series; Fig. 5 is the 3D color point cloud obtained from the color sensor calibration and the information matching process. It can represent the 3D and color information of the object truly and vividly.

[Fig. 3: Color image of the object]  [Fig. 4: 3D point cloud of the object]  [Fig. 5: 3D color point cloud of the object]

4. CONCLUSION

From the theory analysis and experimental results mentioned above, it is indicated that the matching technique based on camera calibration is feasible and can give satisfying precision and results. Neither method needs the camera's intrinsic and extrinsic parameters, such as the scale factor and the image center; the space point coordinates and their corresponding pixel coordinates are enough, which makes the methods easy to adopt and still high in precision. In addition, when the Linear Partition Method is used, the number and form of the partitions should be determined by the practical application, and the image can also be divided in other ways, such as into concentric circles or rectangles; within each partition, linear or even other nonlinear calibrations can be used. The method therefore solves the information matching problem effectively and lays a good foundation for future 3D color reconstruction and texture mapping.

5. ACKNOWLEDGEMENTS

This paper is supported by the National Nature Scientific Research Foundation (No. 60277009).

REFERENCES

[1] Sun Yuchen, Ge Baozhen, Zhang Yimo. Review for the 3D information measuring technology[J]. Journal of Optoelectronics·Laser, 2004, 15(2): 248-254. (in Chinese)
[2] Petrov M., Talapov A., et al. Optical 3D digitizers: bringing life to the virtual world[J]. IEEE Computer Graphics and Applications, 1998, 18(3): 28-37.
[3] Borghese N.A., Ferrigno G., et al. Autoscan: a flexible and portable 3D scanner[J]. IEEE Computer Graphics and Applications, 1998, 18(3): 38-41.
[4] Tsai R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses[J]. IEEE Journal of Robotics and Automation, 1987, RA-3(4): 323-344.
[5] Faig W. Calibration of close-range photogrammetry systems: Mathematical formulation[J]. Photogrammetric Engineering and Remote Sensing, 1975, 41: 1479-1486.

Contact: Ge Baozhen, e-mail: gebz@tju.edu.cn; phone: 022
