We offer one full-time PhD position or one full-time post-doctoral position. The expected starting date is before the end of 2015 for the PhD and May 2016 for the post-doctoral position. The duration of the contract is 3 years for the PhD and 1 year for the post-doc.

Context of the PhD: The research work is part of the European project ARIA-VALUSPA (Affective Retrieval Interface Assistants - using Virtual Agents with Linguistic Understanding, Social skills and Personalised Aspects). The project tackles the development of ECAs that serve as interfaces to retrieval systems and provide more natural answers to users' requests. For example, an ECA can take on the appearance of a character from a novel, with whom the user can talk about the novel's content and its characters. The research work will take place in the LTCI-CNRS laboratory of Telecom-ParisTech, in the GRETA team (http://www.tsi.telecom-paristech.fr/mm/en/themes-2/greta-team/), and will rely on a collaboration with the Lattice-CNRS laboratory (http://www.lattice.cnrs.fr/).

Keywords: Embodied Conversational Agent, Natural Language Generation, Socio-emotional Interaction Strategies, Human-Computer Interaction, Dialog Systems

************************************************************************

Application: We are looking for candidates:
* with an MSc degree (for the PhD) or a PhD (for the post-doc) in Computer Science or equivalent (a degree with a technical background, e.g., machine learning, signal processing, computer graphics, computational linguistics);
* with interests in the research fields of social signal processing, machine learning and human-agent interaction;
* with programming skills: Java.

To apply, submit by email:
* a curriculum vitae;
* a message expressing your interest in the position and the relevance of your profile (directly in the email body);
* a copy of the grades of your MSc degree;
* the contact details of a referee and/or a recommendation letter.

Incomplete applications will not be processed.
************************************************************************

PhD or Post-doc - Verbal alignment strategies in human-agent interactions

Embodied Conversational Agents (ECAs) are virtual characters that allow a machine to dialog with humans in a natural way, using not only the verbal modality but also non-verbal behaviour (facial expressions, gestures). ECAs can take the role of assistants on sales websites or of tutors in serious games. One of the key challenges of human-agent interaction is to maintain the user's engagement in the interaction (Sidner & Dzikovska, 2002). Several strategies can be used to foster this engagement. One of them is the ECA's alignment with the user (Pickering & Garrod, 2004). Alignment can occur at various levels: low-level alignment consists in imitating body postures (Hess et al., 1999) or in using a vocabulary close to the user's; high-level alignment occurs at the mental, emotional or cognitive level. Alignment with the user's opinion or attitude -- also known as affiliation (Stivers, 2008) -- is one example of high-level alignment.

The research work follows a previous study carried out at Telecom-ParisTech that analysed spontaneous human-human conversations in order to identify relevant strategies for fostering the user's engagement in human-ECA interaction (Campano et al., 2014). It will also rely on a collaboration with Lattice-CNRS and its expertise on human-machine dialog, its constraints and its linguistic aspects (Landragin, 2013; Luccioni et al., 2015). It will focus on the development of ECA alignment strategies centered on the verbal modality within a multimodal context (prosody, gesture). The various levels of alignment will be considered. In particular, the research work will deal with the expression of attitudes (Scherer, 2005; Martin & White, 2003) and with the vocabulary linked to their expression.
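As a rough illustration of the low-level lexical alignment described above, an agent's generator can prefer wordings the user has already employed. The sketch below is purely illustrative (the function name `align_wording` and its interface are our own, not part of the GRETA platform): it counts the user's vocabulary and, among synonymous candidate terms, echoes the one the user uses most.

```python
from collections import Counter

def align_wording(user_turns, candidates):
    """Among synonymous candidate terms, pick the one the user has
    used most often; fall back to the first candidate otherwise."""
    vocab = Counter(
        word.lower()
        for turn in user_turns
        for word in turn.split()
    )
    # Candidate most frequent in the user's own vocabulary.
    best = max(candidates, key=lambda c: vocab.get(c.lower(), 0))
    return best if vocab.get(best.lower(), 0) > 0 else candidates[0]

# The user keeps saying "painting", so the agent echoes that term:
history = ["I love this painting", "the painting feels sad"]
print(align_wording(history, ["picture", "painting", "artwork"]))  # -> painting
```

A real system would of course go further (lemmatisation, synonym resources, statistical generation as in Mairesse et al., 2010), but the principle is the same: bias lexical choice toward the interlocutor's established vocabulary.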
Alignment on referring expressions (Landragin, 2006) -- here, the way the user and the ECA refer to the targets of the attitude -- will also be studied. The targeted alignment strategies will rely on reasoning methods and on statistical methods (Mairesse et al., 2010) and will be implemented on the GRETA platform.

Contacts:
Chloé Clavel, associate professor, GRETA team, Télécom ParisTech. Tel: +33 (0)1 45 81 72 54. E-mail: chloe.clavel [at] telecom-paristech.fr
Frédéric Landragin, CNRS researcher, Lattice-CNRS laboratory. Tel: +33 (0)1 58 07 66 21. E-mail: frederic.landragin [at] ens.fr

References:

Campano, S., Durand, J. & Clavel, C. (2014) "Comparative analysis of verbal alignment in human-human and human-agent interactions". In Proceedings of LREC 2014.

Campano, S., Clavel, C. & Pelachaud, C. (2015) "'I like this painting too': when an ECA shares appreciations to engage users". In Proceedings of AAMAS 2015, Istanbul, Turkey. http://www.aamas2015.com/en/AAMAS_2015_USB/aamas/p1649.pdf

Bawden, R., Clavel, C. & Landragin, F. (2015) "Towards the generation of dialogue acts in socio-affective ECAs: a corpus-based prosodic analysis". Language Resources and Evaluation, Springer Netherlands. http://dx.doi.org/10.1007/s10579-015-9312-9

Clavel, C. & Callejas, Z. (2015) "Sentiment analysis: from opinion mining to human-agent interaction". IEEE Transactions on Affective Computing, doi: 10.1109/TAFFC.2015.2444846. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7122903&tag=1

Hess, U., Philippot, P. & Blairy, S. (1999) "Mimicry". In The Social Context of Nonverbal Behavior, p. 213.

Hofs, D., Theune, M. & op den Akker, R. (2010) "Natural interaction with a virtual guide in a virtual environment". Journal on Multimodal User Interfaces 3.1-2: 141-153.

Landragin, F. (2006) "Visual perception, language and gesture: A model for their understanding in multimodal dialogue systems". Signal Processing 86.12: 3578-3595.

Landragin, F. (2013) Man-Machine Dialogue: Design and Challenges.
Wiley & ISTE Publishing.

Luccioni, A., Benotti, L. & Landragin, F. (2015) "Overspecified References: An Experiment on Lexical Acquisition in a Virtual Environment". Computers in Human Behavior 49: 94-101.

Mairesse, F., Gašić, M., Jurčíček, F., Keizer, S., Thomson, B., Yu, K. & Young, S. (2010) "Phrase-based statistical language generation using graphical models and active learning". In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp. 1552-1561.

Martin, J. R. & White, P. R. (2003) The Language of Evaluation. Palgrave Macmillan.

Ochs, M., Ding, Y., Fourati, N., Chollet, M., Ravenet, B., Pecune, F., Glas, N., Prépin, K., Clavel, C. & Pelachaud, C. (2013) "Vers des Agents Conversationnels Animés Socio-Affectifs". Interaction Humain-Machine (IHM'13), Bordeaux, France.

Pickering, M. J. & Garrod, S. (2004) "Toward a mechanistic psychology of dialogue". Behavioral and Brain Sciences 27.2: 169-190.

Prépin, K., Ochs, M. & Pelachaud, C. (2013) "Beyond backchannels: co-construction of dyadic stance by reciprocal reinforcement of smiles between virtual agents". In Proceedings of CogSci 2013 (Annual Conference of the Cognitive Science Society), Berlin, July 2013.

Scherer, K. R. (2005) "What are emotions? And how can they be measured?". Social Science Information 44.4: 695-729.

Sidner, C. L. & Dzikovska, M. (2002) "Human-robot interaction: Engagement between humans and robots for hosting activities". In Proceedings of the 4th IEEE International Conference on Multimodal Interfaces, p. 123. IEEE Computer Society.

Stivers, T. (2008) "Stance, alignment, and affiliation during storytelling: When nodding is a token of affiliation". Research on Language and Social Interaction 41.1: 31-57.