Joint dialog act segmentation and recognition in human conversations using attention to dialog context
-
- Kawahara, Tatsuya
- Graduate School of Informatics, Kyoto University
Abstract
A dialog act represents the communicative function of an utterance in a conversation, and thus provides informative cues for understanding, managing, and generating dialog. While most spoken dialog systems process user input and system output at the turn level, a single turn can consist of multiple dialog acts in human conversations. Therefore, segmenting turn-level tokens into meaningful dialog act units is just as important as recognizing the dialog acts themselves. Towards joint segmentation and recognition of dialog acts, we propose an encoder–decoder model featuring joint coding, and incorporate contextual information by means of an attention mechanism. The proposed encoder–decoder outperforms other models in segmentation, and the application of attention significantly reduces recognition error rates. By combining the encoder–decoder model with contextual attention, we achieve state-of-the-art performance in the joint evaluation of dialog act segmentation and recognition.
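The "joint coding" idea above can be illustrated with a minimal sketch: each token in a turn receives either a continuation tag or a boundary tag that simultaneously marks a segment end and its dialog act, so segmentation and recognition reduce to a single sequence-labeling problem. The tag scheme (`I` / `E_<act>`) and the helper functions below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical joint-coding sketch: "I" = segment continues,
# "E_<act>" = segment ends here with dialog act <act>.
# Tag names and dialog act labels are assumptions for illustration.

def encode_joint(segments):
    """Flatten (tokens, act) segments into parallel token/label lists."""
    tokens, labels = [], []
    for seg_tokens, act in segments:
        tokens.extend(seg_tokens)
        # All tokens but the last continue the segment; the last one
        # carries both the boundary and the dialog act label.
        labels.extend(["I"] * (len(seg_tokens) - 1) + [f"E_{act}"])
    return tokens, labels

def decode_joint(tokens, labels):
    """Recover (tokens, act) segments from a joint label sequence."""
    segments, current = [], []
    for tok, lab in zip(tokens, labels):
        current.append(tok)
        if lab.startswith("E_"):
            segments.append((current, lab[2:]))
            current = []
    return segments

# One turn containing two dialog acts.
turn = [(["yeah"], "agree"),
        (["let", "us", "meet", "tomorrow"], "propose")]
toks, labs = encode_joint(turn)
print(labs)  # ['E_agree', 'I', 'I', 'I', 'E_propose']
print(decode_joint(toks, labs) == turn)  # True
```

In a model like the one described, a decoder would emit these joint labels token by token, while attention over preceding turns supplies the dialog context that helps disambiguate the act label at each boundary.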
Published in
-
- Computer Speech and Language, 57, 108-127, 2019-09
- Elsevier BV
Details
-
- CRID
- 1050564288168698752
-
- NII Article ID
- 120006605392
-
- ISSN
- 0885-2308
- 1095-8363
-
- HANDLE
- 2433/240842
-
- Text language code
- en
-
- Material type
- journal article
-
- Data source type
-
- IRDB
- CiNii Articles