Call for papers

 

INTERSPEECH 2016 Special Session on Speech and Language Technologies for Human-Machine Conversation-based Language Education

 

In recent years, deep learning and big data have jointly delivered significant improvements in the performance and robustness of speech recognition, dialogue management, language understanding and machine translation. In turn, speech and language research has become more active than ever and is generating many new applications. For example, via human-machine conversation, a language learner can practice a target language naturally when online assessment and timely, to-the-point dialogue are available to improve the learner’s skills and progress in the new language. Human-machine interfaces have already been deployed in educational games, simulation-based training applications and intelligent tutoring systems. The success of intelligent personal assistants with natural speech/language interfaces, such as Apple’s Siri, Microsoft’s Cortana and Google Now, further accelerates machine-assisted, conversation-based language learning and assessment. We would like to use this special session as a forum for presenting new research and development results on interactive, machine-assisted language learning. We plan to focus on computer-aided, audio-visual language learning; automatic scoring and assessment; learning error detection and diagnosis; spoken dialogue for tutoring systems; speech and language technologies for education; and related areas. We hope to put together a stimulating session that advances the state of the art in speech and language technologies for machine-assisted, interactive language learning systems.

 

We invite you to submit original papers on any related topic, including but not limited to:

  • Non-native speech modeling, recognition and assessment 
  • Segmental and suprasegmental pronunciation error detection and diagnosis
  • Paralinguistic information handling and modeling, e.g., L1 influence, disfluencies, and ill-formed or misspoken inputs
  • Dialogue modeling in language-learning scenarios
  • Distant/noisy input speech handling
  • Multimodal interactions and dialogues that support language learning
  • Flexible language models for recognizing speech from non-native (L2) language learners
  • Database collection, labeling and distribution
  • Analysis and modeling of spontaneous speech
  • Any conversation-based learning, not limited to language learning

 

Important dates

Submission: March 23, 2016

Acceptance notification: June 10, 2016

Camera-ready paper: June 24, 2016

Conference date: September 8-12, 2016

 

Paper submission

Papers submitted to this Special Session must follow the same schedule and procedure as regular INTERSPEECH papers. When submitting your paper, please check the corresponding box for the Special Session on "Speech and Language Technologies for Human-Machine Conversation-based Language Education" in the INTERSPEECH submission system.

 

Papers will undergo the same review process by anonymous and independent reviewers as regular INTERSPEECH submissions.

 

Organizers

Yao Qian, Educational Testing Service, USA

Helen Meng, The Chinese University of Hong Kong, Hong Kong SAR

Frank K. Soong, Microsoft Research, China

 

For further inquiries regarding the special session, please contact yqian@ets.org.