

Title: Fast Facial Animation Design for Emotional Virtual Humans
Authors: S. Garchery, A. Egges, N. Magnenat-Thalmann
Source: CD-ROM Proceedings, 2005

Fast Facial Animation Design for Emotional Virtual Humans
S. Garchery, A. Egges, N. Magnenat-Thalmann

Abstract

Designing facial animation parameters according to a specific model can be time consuming. In this paper we present a fast approach to designing facial animations based on minimal information (feature points only). All facial deformations are automatically computed from MPEG-4 feature points. We also present an extension of this approach that allows the deformations to be personalized or customized according to different characteristics. We describe different prototypes of the facial animation system for different platforms, and then show how emotions and expressions can be incorporated into the facial animation system.

Keywords
Facial Animation, Parameterization, Personalization, Emotion, MPEG-4

Introduction

A survey of the literature on different approaches to facial animation reveals the following characteristics that should be found in an ideal facial animation system:

Easy to use: a facial animation system should be easy to use and simple to implement. This means:

- be able to work with any kind of face model (male, female, child, cartoon-like, ...);
- require a minimum of time to set up a model for animation;
- allow the animator the creative freedom to define specific deformations if necessary;
- get realistic results;
- be able to precisely control the animation.

Integration: using the system should be simple and fast, and it should work in any kind of environment (PC, web, mobile, ...).

Generality: the possibility to reuse previous work (deformation data or animation) with a new model is a big advantage and reduces the resources needed to develop new animations or applications.

Visual quality: finally, the result should look realistic, whether with a cartoon-like model or a cloned one. Quality should also be taken into account during the design process.
In order to achieve as many of these goals as possible, it is crucial to properly define which parameters are used in the model. By proposing a parameterization system (FACS), Paul Ekman [5] started the definition of a standardization system to be used for facial synthesis. In 1999, MPEG-4 defined an interesting standard using facial animation parameters. This standard proposed to deform the face model directly by manipulating feature points of the face, and presented a novel animation structure specifically optimized for networked applications. The parameters are fully model-independent: based on a small amount of information, the animation is adapted to each face model by the facial animation engine in use. A large amount of research work has been done to develop facial animation engines based on this parameterization system. Typically, piecewise linear interpolation functions for each animation parameter [9,15] are applied to design the desired facial deformations quickly (see section 2.1).

MPEG-4 overview and description
In order to understand facial animation based on the MPEG-4 parameter system, we should describe some keywords of the standard and the pipeline used to animate compliant face models.

FAPU (Facial Animation Parameter Units): all animation parameters are expressed in FAPU units. This unit is based on the face model's proportions and is computed from a few key points of the face (such as the eye distance or the mouth size).
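As an illustration (ours, not from the paper), the sketch below derives a set of FAPUs from hypothetical neutral feature point positions; the division by 1024 follows the MPEG-4 convention of expressing FAPUs as 1/1024 fractions of key face distances.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical neutral-pose feature point positions (in metres); on a real
# model these would be read from the FDP set of the face mesh.
fdp = {
    "left_eye":     (-0.033,  0.040, 0.000),
    "right_eye":    ( 0.033,  0.040, 0.000),
    "nose_bottom":  ( 0.000,  0.000, 0.020),
    "mouth_middle": ( 0.000, -0.030, 0.015),
    "mouth_left":   (-0.026, -0.030, 0.010),
    "mouth_right":  ( 0.026, -0.030, 0.010),
}

eye_mid = tuple((l + r) / 2 for l, r in zip(fdp["left_eye"], fdp["right_eye"]))

# FAPUs as 1/1024 fractions of key face distances.
ES  = dist(fdp["left_eye"], fdp["right_eye"]) / 1024.0        # eye separation
ENS = dist(eye_mid, fdp["nose_bottom"]) / 1024.0              # eye-nose separation
MNS = dist(fdp["nose_bottom"], fdp["mouth_middle"]) / 1024.0  # mouth-nose separation
MW  = dist(fdp["mouth_left"], fdp["mouth_right"]) / 1024.0    # mouth width

print(f"ES={ES:.6f}  ENS={ENS:.6f}  MNS={MNS:.6f}  MW={MW:.6f}")
```

Because the FAPUs are recomputed for each model, the same animation parameter values scale naturally to faces of different proportions.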
FDP (Facial Definition Parameters): this acronym describes a set of 88 feature points of the face model. The FAPUs and the facial animation parameters are based on these feature points. The points can also be used to morph a face model according to specific characteristics.

FAP (Facial Animation Parameters): a set of values, decomposed into high-level and low-level parameters, that represent the displacement of certain feature points (FDPs) along a specific direction. Two special values (FAP 1 and 2) are used to represent visemes and expressions. All 66 low-level FAP values represent the displacement of an FDP along a specific direction (see Figure 1). The combination of all deformations resulting from these displacements forms the final expression; a facial animation is then the evolution of such expressions over time.
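To ground this, here is a minimal sketch (our own, with an illustrative direction vector and FAPU value) of how one low-level FAP could displace its feature point: the displacement is simply the FAP value times the relevant FAPU, along the FAP's fixed direction.

```python
# A minimal sketch of low-level FAP application; the direction vector and
# the FAPU value below are illustrative assumptions, not data from the paper.
def apply_fap(neutral, direction, fap_value, fapu):
    """Displace a feature point by fap_value * fapu along a unit direction."""
    return tuple(p + fap_value * fapu * d for p, d in zip(neutral, direction))

MNS = 0.030 / 1024.0                 # hypothetical mouth-nose separation FAPU
jaw_neutral = (0.0, -0.040, 0.010)   # hypothetical FDP 2.1 (chin) position
down = (0.0, -1.0, 0.0)              # FAP 3 (open_jaw) moves the chin downwards

print(apply_fap(jaw_neutral, down, fap_value=200.0, fapu=MNS))
```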
Other aspects of MPEG-4 facial animation, such as the Facial Interpolation Tables, can be applied to reduce the quantity of data needed to represent an expression or an animation. With this approach, an animation can be represented by a small set of parameters, which is an efficient approach for network applications (less than 2 Kb for each frame).
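The compactness is easy to see: a frame only needs a presence mask plus one value per active FAP. The byte layout below is our own illustration of the order of magnitude, not the actual MPEG-4 bitstream syntax.

```python
import struct

def encode_fap_frame(faps, num_faps=68):
    """Pack {fap_id: value} into a presence mask plus 16-bit values.
    Illustrative layout only, not the MPEG-4 bitstream format."""
    mask = 0
    values = b""
    for fap_id in sorted(faps):
        mask |= 1 << (fap_id - 1)
        values += struct.pack(">h", int(faps[fap_id]))
    return mask.to_bytes((num_faps + 7) // 8, "big") + values

frame = encode_fap_frame({3: 210, 52: -35})  # open_jaw plus one lip FAP
print(len(frame), "bytes per frame")         # a few dozen bytes, far below 2 Kb
```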
Animation Design

Different approaches are possible to produce a facial animation stream more or less in real time, depending on the intended application. In this section, we briefly present some of these approaches.

Text-to-visual approach
When starting from written text, we use a Text-to-Speech engine to produce the phoneme data and the audio. For defining the visemes and expressions, we use the Principal Components (PCs) described by Kshirsagar et al. [10]. The PCs are derived from a statistical analysis of facial motion data and reflect the independent facial movements observed during fluent speech. The main steps incorporated in the visual front-end are the following:

- Generation of FAPs from text.
- Expression blending: each expression is associated with an intensity value and is blended with the previously calculated co-articulated phoneme trajectories (a sketch follows this list).
- Periodic facial movements: periodic eye blinks and minor head movements are applied to the face for increased believability.
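The blending step can be pictured as a per-parameter weighted sum. The sketch below is our reading of it, with made-up FAP ids and values; the paper's actual blending operates on the co-articulated trajectories.

```python
# Our sketch of expression blending: a per-FAP weighted sum of the speech
# trajectory frame and an expression frame scaled by its intensity in [0, 1].
# FAP ids and values are illustrative, not taken from the paper.
def blend_frame(speech_frame, expression_frame, intensity):
    out = dict(speech_frame)
    for fap_id, value in expression_frame.items():
        out[fap_id] = out.get(fap_id, 0.0) + intensity * value
    return out

speech = {3: 180.0, 52: -20.0}   # jaw and lip FAPs from the phoneme trajectory
smile  = {53: 60.0, 54: 60.0}    # corner-lip FAPs of a smile expression
print(blend_frame(speech, smile, intensity=0.7))
```

Optical capture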
In order to capture more realistic trajectories of the feature points on the face, we use a commercial optical tracking system to capture the facial motions, with 6 cameras and 27 markers corresponding to MPEG-4 FDPs. In the parameterization system, a total of 40 FDPs are animatable, but since markers are difficult to set up on the tongue and the lips, we use a subset of 27. We obtain 3D trajectories for each of the marker points as the output of the tracking system, suitable as well for 2D animation. Head movement was not restricted during data capture, so a later compensation step is needed to recover the local deformations of the face markers [2].
Once we extract the head movements, the global movement component is removed from the motion trajectories of all the feature point markers, resulting in the absolute local displacements. The MPEG-4 FAP values are then easily calculated from these displacements. For example, FAP 3 (open jaw) is defined as the normalized displacement of FDP 2.1 from the neutral position, scaled by the FAPU MNS (mouth-nose separation). As FAP values are defined as normalized displacements of the feature points from the neutral position, it is trivial to compute a FAP value given the neutral position and the displacement from this position [11]. The algorithm is based on a general-purpose feature-point-based mesh deformation, which is extended for a facial mesh using MPEG-4 facial feature points.
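As a concrete illustration (our own numbers, with an identity head pose for simplicity), recovering FAP 3 from a tracked jaw marker could look like this:

```python
# Recovering FAP 3 (open_jaw) from a tracked jaw marker, assuming the rigid
# head transform (rotation R, translation t) has been estimated elsewhere.
# All coordinates and the FAPU value are hypothetical.
def local_displacement(marker, neutral, R, t):
    """Undo the rigid head transform, then subtract the neutral position."""
    p = [marker[i] - t[i] for i in range(3)]
    # the inverse of an orthonormal rotation matrix is its transpose
    local = [sum(R[j][i] * p[j] for j in range(3)) for i in range(3)]
    return [local[i] - neutral[i] for i in range(3)]

MNS = 0.030 / 1024.0                      # hypothetical mouth-nose FAPU
d = local_displacement(marker=[0.001, -0.046, 0.011],
                       neutral=[0.000, -0.040, 0.010],
                       R=[[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                       t=[0.0, 0.0, 0.0])
fap3 = -d[1] / MNS                        # downward displacement in FAPU units
print(round(fap3))                        # ~205
```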
3.3 Automatic personalization of animation

As presented above, an optical tracking system can be used to capture the spatial motion of markers placed on the face. This information is converted into MPEG-4 FAPs that can be interpreted by any MPEG-4 compliant facial animation engine. But during the conversion of motion capture data to FAPs, we lose some information because of the FAP restrictions (see section 2.1). In other words, with the standard format we cannot fully recover the same displacement that we had from the motion capture: the displacement along motion directions that are not stored is lost. We propose a solution to recover this lost information during the synthesis.
As described above, our system is able to deform a face model according to the FDP positions. This deformation is normally computed from FAP values expressed in FAPU units. However, the system can move any FDP point, and not only those points that are driven by FAP values. In other words, we compute the deformation simply from the spatial positions of the control points; thus we are able to deform a face model according to any FDP positions, whether they come from FAP values or not. We then propose to apply a spring-mass network to recalibrate the spatial positions of the control points. The points are not connected linearly, because linearly connecting the marker points would make the animation less dynamic. A minimal sketch of the idea follows.
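The sketch below shows such a spring-mass pass under our own assumptions (explicit relaxation, uniform stiffness, made-up topology); the paper does not specify the network or its parameters, and per-subject stiffness is exactly what the personalization discussed next would tune.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Our sketch of spring-mass recalibration of control points: each point is
# pulled towards its captured target, while springs between neighbouring
# points try to keep their rest lengths. Topology and stiffnesses are made up.
def relax(points, targets, springs, k_target=0.5, k_spring=0.2, iters=50):
    pts = [list(p) for p in points]
    rest = {s: dist(points[s[0]], points[s[1]]) for s in springs}
    for _ in range(iters):
        for i, p in enumerate(pts):                    # pull towards targets
            for a in range(3):
                p[a] += k_target * (targets[i][a] - p[a])
        for (i, j) in springs:                         # enforce rest lengths
            d = dist(pts[i], pts[j])
            if d == 0.0:
                continue
            corr = 0.5 * k_spring * (d - rest[(i, j)]) / d
            for a in range(3):
                delta = corr * (pts[j][a] - pts[i][a])
                pts[i][a] += delta
                pts[j][a] -= delta
    return pts

neutral  = [(0.00, 0.0, 0.0), (0.02,  0.00, 0.0), (0.04, 0.0, 0.0)]
captured = [(0.00, 0.0, 0.0), (0.02, -0.01, 0.0), (0.04, 0.0, 0.0)]
print(relax(neutral, captured, springs=[(0, 1), (1, 2)]))
```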
Another interesting idea with this approach is the possibility to personalize the values of the mass-spring parameters according to a specific person. The ultimate goal is to be able to set these spring-mass parameters automatically, and to use them to reproduce a more realistic animation with the same number of facial animation parameters. This research is ongoing.

References
[1] Arnold, M.B. (1960). Emotion and Personality. Columbia University Press, New York.
[2] Blostein, S.; Huang, T. (1988). Motion Understanding: Robot and Human Vision. Kluwer Academic Publishers, pp. 329-352.
[3] Cornelius, R.R. (1996). The Science of Emotion: Research and Tradition in the Psychology of Emotion. Prentice-Hall, Upper Saddle River, NJ.
[4] Egges, A.; Kshirsagar, S.; Magnenat-Thalmann, N. (2004). Generic personality and emotion simulation for conversational agents. Computer Animation and Virtual Worlds, 15(1):1-13.
[5] Ekman, P.; Friesen, W.V. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, California.
[6] Ekman, P. (1982). Emotion in the Human Face. Cambridge University Press, New York.
[7] Elliott, C.D. (1992). The Affective Reasoner: A Process Model of Emotions in a Multi-Agent System. PhD thesis, Northwestern University.
[8] Garchery, S.; Magnenat-Thalmann, N. (2001). Designing MPEG-4 facial animation tables for web applications. In Multimedia Modeling 2001, Amsterdam, pages 39-59.
[9] Kim, J.W.; Song, M.; Kim, I.J.; Kwon, Y.M.; Kim, H.G.; Ahn, S.C. (2000). Automatic FDP/FAP generation from an image sequence. In ISCAS 2000 - IEEE International Symposium on Circuits and Systems.
[10] Kshirsagar, S.; Molet, T.; Magnenat-Thalmann, N. (2001). Principal components of expressive speech animation. In Proceedings of Computer Graphics International, pages 59-69.
[11] Kshirsagar, S.; Garchery, S.; Magnenat-Thalmann, N. (2001). Feature Point Based Mesh Deformation Applied to MPEG-4 Facial Animation. Kluwer Academic Publishers, pp. 33-43.
[12] Magnenat-Thalmann, N.; Thalmann, D. (2004). Handbook of Virtual Humans. John Wiley & Sons, Ltd. ISBN 0-470-02316-3.
[13] Noh, J.Y.; Fidaleo, D.; Neumann, U. (2000). Animated deformations with radial basis functions. In VRST, pages 166-174.
[14] Ortony, A.; Clore, G.L.; Collins, A. (1988). The Cognitive Structure of Emotions. Cambridge University Press.
[15] Ostermann, J. (1998). Animation of synthetic faces in MPEG-4. In Computer Animation, Philadelphia, Pennsylvania, USA.
[16] Pasquariello, S.; Pelachaud, C. (2001). Greta: A simple facial animation engine. In 6th Online World Conference on Soft Computing in Industrial Applications, Session on Soft Computing for Intelligent 3D Agents.
[17] Plutchik, R. (1980). Emotion: A Psychoevolutionary Synthesis. Harper & Row, New York.