Hands-Free Speech Interaction With Artificial Companions.

Abstract:

The automatic recognition of speech is undoubtedly one of the most interesting interfaces to develop for human-machine interaction. This work aims to provide an expanded view of what has been experimented, the current challenges, and the planned research towards the “perfect” dialogue environment of an Ambient Assisted Living structure. The FP7 CompanionAble project offers the context and support for this research. Both a Smart Home and a Robot Companion environment are involved in the automatic speech recognition behaviours. The target speakers are mainly elderly persons living alone, considered care recipients. Maximum a Posteriori (MAP) adaptation to the speaker and channel increases word recognition with the 1-best hypothesis to up to 87.34% for elderly speakers. Maximum Likelihood Linear Regression (MLLR) has reached 94.44% semantically correct sentences for an adult speaker. Moreover, the integration and synchronisation with external sound modules are discussed to clarify the main aspects of the interactive architecture.
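As background to the adaptation results above: MLLR adapts an acoustic model to a new speaker or channel by applying a shared affine transform, μ' = Aμ + b, to the Gaussian mean vectors of the HMM states. The sketch below illustrates only this mean-transformation step with a toy, hand-picked transform; the matrix A, bias b, and function name are illustrative assumptions, not the estimation procedure or values used in the paper.

```python
import numpy as np

def mllr_adapt_means(means, A, b):
    """Apply a (hypothetical) MLLR mean transform mu' = A @ mu + b
    to every Gaussian mean vector in the acoustic model.

    means: (n_gaussians, dim) array of model mean vectors
    A:     (dim, dim) transform matrix, estimated from adaptation data
    b:     (dim,) bias vector
    """
    return means @ A.T + b

# Toy example: 3 Gaussians in a 2-D feature space
means = np.array([[0.0, 1.0],
                  [2.0, 0.0],
                  [1.0, 1.0]])
A = np.eye(2) * 1.1          # mild rescaling (illustrative, not trained)
b = np.array([0.5, -0.5])    # speaker/channel offset (illustrative)

adapted = mllr_adapt_means(means, A, b)
print(adapted)
```

In practice the transform (A, b) is estimated by maximising the likelihood of the speaker's adaptation data, and one transform is typically shared across a regression class of Gaussians, which is what makes MLLR effective with little adaptation data.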

Document type:
Conference communication
5th Companion Robotics Institute Workshop, 2011, Brussels, Belgium

Contributor: Admin Télécom Paristech <>
Submitted on: Friday, 19 August 2011 - 11:54:57
Last modified on: Thursday, 11 January 2018 - 06:23:38


  • HAL Id : hal-00615464, version 1


Daniel R.S. Caon, Jérôme Boudy, Gérard Chollet. Hands-Free Speech Interaction With Artificial Companions. 5th Companion Robotics Institute Workshop, 2011, Brussels, Belgium. 〈hal-00615464〉


