
Y. Ding, M. Radenen, T. Artières, and C. Pelachaud, Eyebrow motion synthesis driven by speech, WACAI 2012 Workshop Affect, Compagnon Artificiel, Interaction, 2012.

D. H. Eberly, 3D Game Engine Design: A Practical Approach to Real-time Computer Graphics, 2000.

V. Adelswärd, Laughter and dialogue: The social significance of laughter in institutional discourse, Nordic Journal of Linguistics, vol.12, issue.2, pp.107-136, 1989.

M. El Ayadi, M. S. Kamel, and F. Karray, Survey on speech emotion recognition: Features, classification schemes, and databases, Pattern Recognition, vol.44, issue.3, pp.572-587, 2011.
DOI : 10.1016/j.patcog.2010.09.020

G. Bailly, M. Bérar, F. Elisei, and M. Odisio, Audiovisual speech synthesis, International Journal of Speech Technology, vol.6, pp.331-346, 2003.
URL : https://hal.archives-ouvertes.fr/hal-00169556

R. Banse and K. R. Scherer, Acoustic profiles in vocal emotion expression, Journal of Personality and Social Psychology, vol.70, issue.3, pp.614-636, 1996.
DOI : 10.1037/0022-3514.70.3.614

J. Beskow, Rule-based visual speech synthesis, EUROSPEECH '95, 4th European Conference on Speech Communication and Technology, 1995.

E. Bevacqua, K. Prepin, R. Niewiadomski, E. de Sevin, and C. Pelachaud, GRETA: Towards an interactive conversational virtual companion, Artificial Companions in Society: Perspectives on the Present and Future, pp.1-17, 2010.
DOI : 10.1075/nlp.8.20bev

P. Boersma and D. Weenink, Praat, a system for doing phonetics by computer, Glot International, vol.5, issue.9/10, pp.341-345, 2001.

D. L. Bolinger, Intonation and Its Uses: Melody in Grammar and Discourse, 1989.

H. Boukricha, I. Wachsmuth, A. Hofstätter, and K. Grammer, Pleasure-arousal-dominance driven facial expression simulation, 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, pp.119-125, 2009.
DOI : 10.1109/ACII.2009.5349579

D. Bradley, W. Heidrich, T. Popa, and A. Sheffer, High resolution passive facial performance capture, ACM Transactions on Graphics (Proc. SIGGRAPH), 2010.

M. Brand, Voice puppetry, Proceedings of the 26th annual conference on Computer graphics and interactive techniques , SIGGRAPH '99, pp.21-28, 1999.
DOI : 10.1145/311535.311537

M. Brand, Coupled hidden Markov models for modeling interacting processes, 1997.

C. Bregler, M. Covell, and M. Slaney, Video Rewrite: Driving visual speech with audio, Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '97, pp.353-360, 1997.
DOI : 10.1145/258734.258880

T. S. Buchanan, D. G. Lloyd, K. Manal, and T. F. Besier, Neuromusculoskeletal Modeling: Estimation of Muscle Forces and Joint Moments and Movements from Measurements of Neural Command, Journal of Applied Biomechanics, vol.20, issue.4, pp.367-395, 2004.
DOI : 10.1123/jab.20.4.367

C. Busso, Z. Deng, M. Grimm, U. Neumann, and S. Narayanan, Rigid Head Motion in Expressive Speech Animation: Analysis and Synthesis, IEEE Transactions on Audio, Speech and Language Processing, vol.15, issue.3, pp.1075-1086, 2007.
DOI : 10.1109/TASL.2006.885910

C. Busso, Z. Deng, U. Neumann, and S. Narayanan, Natural head motion synthesis driven by acoustic prosodic features, Computer Animation and Virtual Worlds, vol.16, issue.3-4, pp.283-290, 2005.
DOI : 10.1002/cav.80

Z. Callejas, B. Ravenet, M. Ochs, and C. Pelachaud, A computational model of social attitudes for a virtual recruiter, Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, AAMAS '14, International Foundation for Autonomous Agents and Multiagent Systems, pp.93-100, 2014.

B. De Carolis, C. Pelachaud, I. Poggi, and M. Steedman, APML, a Markup Language for Believable Behavior Generation, Lifelike Characters: Tools, Affective Functions and Applications, 2004.
DOI : 10.1007/978-3-662-08373-4_4

J. Cassell, C. Pelachaud, N. I. Badler, M. Steedman, B. Achorn et al., Animated conversation: Rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents, Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '94, pp.413-420, 1994.
DOI : 10.1145/192161.192272

J. Cassell, H. Vilhjálmsson, and T. Bickmore, BEAT: The Behavior Expression Animation Toolkit, Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '01, 2001.
DOI : 10.1145/383259.383315

C. Cavé, I. Guaïtella, R. Bertrand, S. Santi, F. Harlay et al., About the relationship between eyebrow movements and F0 variations, Proceedings of the Fourth International Conference on Spoken Language Processing, ICSLP '96, pp.2175-2179, 1996.
DOI : 10.1109/ICSLP.1996.607235

H. Çakmak, J. Urbain, J. Tilmanne, and T. Dutoit, Evaluation of HMM-based visual laughter synthesis, 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.
DOI : 10.1109/ICASSP.2014.6854469

C. Chiu and S. Marsella, How to Train Your Avatar: A Data Driven Approach to Gesture Generation, Proceedings of the 10th International Conference on Intelligent Virtual Agents, pp.127-140, 2011.
DOI : 10.1007/978-3-642-23974-8_14

C. Chiu and S. Marsella, Gesture generation with low-dimensional embeddings, Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, International Foundation for Autonomous Agents and Multiagent Systems, pp.781-788, 2014.

C. Clavel, J. Plessier, J. Martin, L. Ach, and B. Morel, Combining Facial and Postural Expressions of Emotions in a Virtual Character, Intelligent Virtual Agents, pp.287-300, 2009.
DOI : 10.1007/978-3-642-04380-2_31

M. M. Cohen and D. W. Massaro, Modeling coarticulation in synthetic visual speech, Models and Techniques in Computer Animation, pp.139-156, 1993.

D. Cosker and J. Edge, Laughing, crying, sneezing and yawning: Automatic voice driven animation of non-speech articulations, Proceedings of Computer Animation and Social Agents, pp.21-24, 2009.

M. Costa, T. Chen, and F. Lavagetto, Visual prosody analysis for realistic motion synthesis of 3D head models, Proc. of ICAV3D'01, International Conference on Augmented, Virtual Environments and 3D Imaging, pp.343-346, 2001.

M. Courgeon, C. Clavel, and J. Martin, Appraising emotional events during a real-time interactive game, Proceedings of the International Workshop on Affective-Aware Virtual Agents and Social Robots, AFFINE '09, pp.1-7, 2009.
DOI : 10.1145/1655260.1655267

R. Cowie and E. Douglas-Cowie, Speakers and hearers are people: Reflections on speech deterioration as a consequence of acquired deafness, Whurr, pp.510-527, 1995.

C. Darwin, The Expression of the Emotions in Man and Animals, 1872.

I. de Kok and D. Heylen, When do we smile? Analysis and modeling of the nonverbal context of listener smiles in conversation, Affective Computing and Intelligent Interaction, pp.477-486, 2011.

C. M. de Melo and J. Gratch, Expression of emotions using wrinkles, blushing, sweating and tears, Proceedings of the 9th International Conference on Intelligent Virtual Agents, 2009.

C. M. de Melo, P. Kenny, and J. Gratch, Real-time expression of affect through respiration, Computer Animation and Virtual Worlds, vol.21, issue.3-4, pp.225-234, 2010.

L. Deng, A generalized hidden Markov model with state-conditioned trend functions of time for the speech signal, Signal Processing, vol.27, issue.1, pp.65-78, 1992.
DOI : 10.1016/0165-1684(92)90112-A

Z. Deng, J. P. Lewis, and U. Neumann, Synthesizing speech animation by learning compact speech co-articulation models, Computer Graphics International 2005, pp.19-25, 2005.

P. C. DiLorenzo, V. B. Zordan, and B. L. Sanders, Laughing out loud: Control for modeling anatomically inspired laughter using audio, ACM Trans. Graph., vol.27, issue.5, p.125, 2008.

Y. Ding, C. Pelachaud, and T. Artières, Modeling Multimodal Behaviors from Speech Prosody, IVA, pp.217-228, 2013.
DOI : 10.1007/978-3-642-40415-3_19

Y. Ding, M. Radenen, T. Artières, and C. Pelachaud, Speech-driven eyebrow motion synthesis with contextual Markovian models, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp.3756-3760, 2013.
DOI : 10.1109/ICASSP.2013.6638360
URL : https://hal.archives-ouvertes.fr/hal-01215185

P. Ekman, About brows: Emotional and conversational signals, Human Ethology: Claims and Limits of a New Discipline, pp.169-248, 1979.

P. Ekman, Emotions Revealed, New York: Times Books, 2003.

P. Ekman and W. Friesen, Facial Action Coding System: A Technique for the Measurement of Facial Movement, 1978.

P. Ekman and W. Friesen, Felt, false, and miserable smiles, Journal of Nonverbal Behavior, vol.6, issue.4, pp.238-251, 1982.
DOI : 10.1007/BF00987191

P. Ekman, W. V. Friesen, and J. C. Hager, Facial Action Coding System (FACS): Manual. A Human Face, 2002.

T. Ezzat, G. Geiger, and T. Poggio, Trainable videorealistic speech animation, Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, pp.57-64, 2004.

G. Fanelli, J. Gall, H. Romsdorfer, T. Weise, and L. Van Gool, A 3-D Audio-Visual Corpus of Affective Communication, IEEE Transactions on Multimedia, vol.12, issue.6, pp.591-598, 2010.
DOI : 10.1109/TMM.2010.2052239

M. L. Flecha-García, Eyebrow raises in dialogue and their relation to discourse structure, utterance function and pitch accents in English, Speech Communication, vol.52, issue.6, pp.542-554, 2010.

N. Fourati and C. Pelachaud, Emilya: Emotional body expression in daily actions database, LREC 2014, the 9th International Conference on Language Resources and Evaluation, pp.3486-3493, 2014.

H. P. Graf, E. Cosatto, V. Strom, and F. Huang, Visual prosody: Facial movements accompanying speech, Proceedings of AFGR 2002, pp.381-386, 2002.

B. Granström and D. House, Audiovisual representation of prosody in expressive speech communication, Speech Communication, vol.46, issue.3-4, pp.473-484, 2005.
DOI : 10.1016/j.specom.2005.02.017

H. J. Griffin, M. S. Aung, B. Romera-Paredes, C. McLoughlin, G. McKeown et al., Laughter Type Recognition from Whole Body Motion, 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, pp.349-355, 2013.
DOI : 10.1109/ACII.2013.64

D. Heylen, E. Bevacqua, M. Tellier, and C. Pelachaud, Searching for Prototypical Facial Feedback Signals, IVA 2007, pp.147-153, 2007.
DOI : 10.1007/978-3-540-74997-4_14
URL : https://hal.archives-ouvertes.fr/hal-00433316

G. Hofer and H. Shimodaira, Automatic head motion prediction from speech data, Proc. Interspeech, 2007.

G. Hofer, J. Yamagishi, and H. Shimodaira, Speech-driven lip motion generation with a trajectory HMM, Proc. Interspeech, pp.2314-2317, 2008.

T. Huber and W. Ruch, Laughter as a uniform category? A historic analysis of different types of laughter, 10th Congress of the Swiss Society of Psychology, 2007.

D. Keltner, Signs of appeasement: Evidence for the distinct displays of embarrassment, amusement, and shame, Journal of Personality and Social Psychology, vol.68, issue.3, pp.441-454, 1995.
DOI : 10.1037/0022-3514.68.3.441

A. Kendon, Gesticulation and Speech: Two Aspects of the Process of Utterance, in M. R. Key (ed.), The Relationship of Verbal and Nonverbal Communication, pp.207-227, 1980.
DOI : 10.1515/9783110813098.207

S. Kopp, B. Krenn, S. Marsella, A. N. Marshall, C. Pelachaud, H. Pirker, K. R. Thórisson, and H. Vilhjálmsson, Towards a common framework for multimodal generation: The behavior markup language, International Conference on Intelligent Virtual Agents, pp.205-217, 2006.

S. Kshirsagar and N. Magnenat-Thalmann, Visyllable Based Speech Animation, Computer Graphics Forum, vol.22, issue.3, pp.632-640, 2003.
DOI : 10.1016/S0364-0213(99)80001-9

T. Kuratate, K. G. Munhall, P. Rubin, E. Vatikiotis-Bateson, and H. Yehia, Audio-visual synthesis of talking faces from speech production correlates, EUROSPEECH, pp.1279-1282, 1999.

J. Lafferty, A. Mccallum, and F. Pereira, Conditional random fields: Probabilistic models for segmenting and labeling sequence data, ICML, pp.282-289, 2001.

B. H. Le, X. Ma, and Z. Deng, Live Speech Driven Head-and-Eye Motion Generators, IEEE Transactions on Visualization and Computer Graphics, vol.18, issue.11, pp.1902-1914, 2012.
DOI : 10.1109/TVCG.2012.74

C. Lee, E. Mower, C. Busso, S. Lee, and S. Narayanan, Emotion recognition using a hierarchical binary decision tree approach, Speech Communication, vol.53, issue.9-10, pp.1162-1171, 2011.
DOI : 10.1016/j.specom.2011.06.004

J. Lee and S. Marsella, Modeling Speaker Behavior: A Comparison of Two Approaches, International Conference on Intelligent Virtual Agents, pp.161-174, 2012.
DOI : 10.1007/978-3-642-33197-8_17

J. Lee and S. C. Marsella, Predicting speaker head nods and the effects of affective information, IEEE Transactions on Multimedia, vol.12, issue.6, pp.552-562, 2010.

S. Levine, C. Theobalt, and V. Koltun, Real-time prosody-driven synthesis of body language, ACM Transactions on Graphics, vol.28, issue.5, p.172, 2009.

S. Levine, P. Krähenbühl, S. Thrun, and V. Koltun, Gesture controllers, ACM Transactions on Graphics (TOG), vol.29, issue.4, p.124, 2010.

H. Li, J. Yu, Y. Ye, and C. Bregler, Realtime facial animation with on-the-fly correctives, ACM Transactions on Graphics, vol.32, issue.4, 2013.
DOI : 10.1145/2461912.2462019

Y. Li and H.-Y. Shum, Learning dynamic audio-visual mapping with input-output hidden Markov models, IEEE Trans. on Multimedia, pp.542-549, 2006.

W. Liu, B. Yin, X. Jia, and D. Kong, Audio to visual signal mappings with HMM, IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004.

E. S. Luschei, L. O. Ramig, E. M. Finnegan, K. K. Baker, and M. E. Smith, Patterns of Laryngeal Electromyography and the Activity of the Respiratory System During Spontaneous Laughter, Journal of Neurophysiology, vol.96, issue.1, pp.442-450, 2006.
DOI : 10.1152/jn.00102.2006

X. Ma, B. H. Le, and Z. Deng, Perceptual analysis of talking avatar head movements, Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems, CHI '11, pp.2699-2702, 2011.
DOI : 10.1145/1978942.1979337

M. Mancini, G. Varni, D. Glowinski, and G. Volpe, Computing and Evaluating the Body Laughter Index, Proceedings of HBU, pp.90-98, 2012.
DOI : 10.1007/978-3-642-34014-7_8

S. Mariooryad and C. Busso, Generating Human-Like Behaviors Using Joint, Speech-Driven Models for Conversational Agents, IEEE Transactions on Audio, Speech, and Language Processing, vol.20, issue.8, pp.2329-2340, 2012.
DOI : 10.1109/TASL.2012.2201476

S. Marsella, Y. Xu, M. Lhommet, A. W. Feng, S. Scherer et al., Virtual character performance from speech, Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '13, pp.25-35, 2013.
DOI : 10.1145/2485895.2485900

G. McKeown, W. Curran, C. McLoughlin, H. J. Griffin, and N. Bianchi-Berthouze, Laughter induction techniques suitable for generating motion capture data of laughter associated body movements, 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp.1-5, 2013.
DOI : 10.1109/FG.2013.6553806

H. McGurk and J. MacDonald, Hearing lips and seeing voices, Nature, vol.264, pp.746-748, 1976.

D. McNeill, Hand and Mind: What Gestures Reveal about Thought, 1992.
DOI : 10.1515/9783110874259.351

A. H. Mines, Respiratory physiology, 1993.

K. G. Munhall, J. A. Jones, D. E. Callan, T. Kuratate, and E. Vatikiotis-Bateson, Visual Prosody and Speech Intelligibility: Head Movement Improves Auditory Speech Perception, Psychological Science, vol.15, issue.2, pp.133-137, 2004.

M. Neff and E. Fiume, Modeling tension and relaxation for computer animation, Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '02, 2002.
DOI : 10.1145/545261.545275

V. Ng-Thow-Hing, Anatomically-based Models for Physical and Geometric Reconstruction of Humans and Other Animals, PhD thesis, University of Toronto, 2001.

R. Niewiadomski and C. Pelachaud, Towards Multimodal Expression of Laughter, IVA, pp.231-244, 2012.
DOI : 10.1007/978-3-642-33197-8_24

R. Niewiadomski, J. Hofmann, J. Urbain, T. Platt, J. Wagner, O. Pietquin, W. Ruch, et al., Laugh-aware virtual agent and its impact on user amusement, AAMAS, pp.619-626, 2013.

R. Niewiadomski, M. Mancini, and T. Baur, MMLI: Multimodal Multiperson Corpus of Laughter in Interaction, Proceedings of the 4th International Workshop on Human Behavior Understanding, pp.184-195, 2013.
DOI : 10.1007/978-3-319-02714-2_16

M. Ochs and C. Pelachaud, Model of the perception of smiling virtual character, AAMAS, pp.87-94, 2012.

I. S. Pandzic and R. Forchheimer, MPEG-4 Facial Animation: The Standard, Implementation and Applications, 2002.

F. I. Parke, Computer generated animation of faces, Proceedings of the ACM National Conference, pp.451-457, 1972.

S. Pasquariello and C. Pelachaud, Greta: A Simple Facial Animation Engine, Soft Computing and Industry, pp.511-525, 2002.
DOI : 10.1007/978-1-4471-0123-9_43

C. Pelachaud, N. I. Badler, and M. Steedman, Generating Facial Expressions for Speech, Cognitive Science, vol.20, issue.1, pp.1-46, 1996.
DOI : 10.1207/s15516709cog2001_1

C. Pelachaud and I. Poggi, Subtleties of facial expressions in embodied agents, The Journal of Visualization and Computer Animation, vol.13, issue.5, pp.301-312, 2002.
DOI : 10.1002/vis.299

K. Perlin, Improving noise, Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '02, pp.681-682, 2002.

F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin, Synthesizing realistic facial expressions from photographs, Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98, pp.75-84, 1998.

I. Poggi, Mind, hands, face, and body: A sketch of a goal and belief view of multimodal communication, 2007.
DOI : 10.1515/9783110261318.627

R. R. Provine, Laughter: A Scientific Investigation, Penguin Books, 2001.

L. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE, vol.77, issue.2, pp.257-286, 1989.

M. Radenen and T. Artières, Contextual Hidden Markov Models, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.2113-2116, 2012.
DOI : 10.1109/ICASSP.2012.6288328
URL : https://hal.archives-ouvertes.fr/hal-01273312

B. Reeves and C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, 1996.

W. Ruch and P. Ekman, The expressive pattern of laughter, Emotion, Qualia, and Consciousness, pp.426-443, 2001.

W. Ruch, G. Köhler, and C. van Thriel, Assessing the "humorous temperament": Construction of the facet and standard trait forms of the State-Trait-Cheerfulness-Inventory (STCI), Humor: International Journal of Humor Research, vol.9, issue.3-4, pp.303-339, 1996.
DOI : 10.1515/humr.1996.9.3-4.303

M. Russell, A segmental HMM for speech pattern modelling, IEEE International Conference on Acoustics Speech and Signal Processing, pp.499-502, 1993.
DOI : 10.1109/ICASSP.1993.319351

J. M. Saragih, S. Lucey, and J. F. Cohn, Deformable model fitting by regularized landmark mean-shift, International Journal of Computer Vision, vol.91, issue.2, pp.200-215, 2011.

M. E. Sargin, Y. Yemez, E. Erzin, and A. M. Tekalp, Analysis of Head Gesture and Prosody Patterns for Prosody-Driven Head-Gesture Animation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.30, issue.8, pp.1330-1345, 2008.
DOI : 10.1109/TPAMI.2007.70797

M. Schröder and J. Trouvain, The MARY text-to-speech system, 2001.

B. Schuller, G. Rigoll, and M. Lang, Hidden Markov model-based speech emotion recognition, 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), pp.1-4, 2003.

W. A. Sethares and T. W. Staley, Periodicity transforms, IEEE Transactions on Signal Processing, vol.47, issue.11, pp.2953-2964, 1999.
DOI : 10.1109/78.796431

K. Tokuda, T. Yoshimura, T. Masuko, T. Kobayashi, and T. Kitamura, Speech parameter generation algorithms for HMM-based speech synthesis, 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp.1315-1318, 2000.
DOI : 10.1109/ICASSP.2000.861820

J. Urbain, E. Bevacqua, T. Dutoit, A. Moinet, R. Niewiadomski et al., The AVLaughterCycle database, LREC, 2010.

J. Urbain, H. Çakmak, and T. Dutoit, Automatic Phonetic Transcription of Laughter and Its Application to Laughter Synthesis, 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, pp.153-158, 2013.
DOI : 10.1109/ACII.2013.32

H. Vilhjálmsson, N. Cantelmo, J. Cassell, N. E. Chafai, M. Kipp et al., The Behavior Markup Language: Recent developments and challenges, Lecture Notes in Artificial Intelligence, vol.4722, pp.99-110, 2007.

H. G. Wallbott and K. Scherer, Cues and channels in emotion recognition, Journal of Personality and Social Psychology, vol.51, issue.4, 1986.
DOI : 10.1037/0022-3514.51.4.690

K. Waters and T. Levergood, An automatic lip-synchronization algorithm for synthetic faces, Proceedings of the second ACM international conference on Multimedia , MULTIMEDIA '94, pp.149-156, 1994.
DOI : 10.1145/192593.192644

T. Weise, B. Leibe, and L. Van Gool, Fast 3D Scanning with Automatic Motion Compensation, 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp.1-8, 2007.
DOI : 10.1109/CVPR.2007.383291

A. D. Wilson and A. F. Bobick, Parametric hidden Markov models for gesture recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.21, issue.9, pp.884-900, 1999.
DOI : 10.1109/34.790429

C. Wu and W. Liang, Emotion recognition of affective speech based on multiple classifiers using acoustic-prosodic information and semantic labels (extended abstract), 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015.
DOI : 10.1109/ACII.2015.7344613

J. Xue, J. Borgstrom, J. Jiang, L. Bernstein, and A. Alwan, Acoustically-Driven Talking Face Synthesis using Dynamic Bayesian Networks, 2006 IEEE International Conference on Multimedia and Expo, pp.1165-1168, 2006.
DOI : 10.1109/ICME.2006.262743

H. Yehia, P. Rubin, and E. Vatikiotis-Bateson, Quantitative association of vocal-tract and facial behavior, Speech Communication, vol.26, issue.1-2, pp.23-43, 1998.
DOI : 10.1016/S0167-6393(98)00048-X

F. E. Zajac, Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control, Critical reviews in biomedical engineering, vol.17, issue.4, pp.359-411, 1989.

V. B. Zordan, B. Celly, B. Chiu, and P. C. DiLorenzo, Breathe easy: Model and control of simulated respiration for animation, Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '04, pp.29-37, 2004.