Attention-based ASR with Lightweight and Dynamic Convolutions
Yuya Fujita, Aswin Shanmugam Subramanian (Johns Hopkins University), Motoi Omachi, Shinji Watanabe (Johns Hopkins University)
45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), 2020/5
Speech Processing
- End-to-end (E2E) automatic speech recognition (ASR) with sequence-to-sequence models has gained attention because its training is simpler than that of conventional hidden Markov model based ASR. Recently, several studies have reported state-of-the-art E2E ASR results obtained with the Transformer. Compared to recurrent neural network based E2E models, the Transformer is faster to train and achieves better performance on various tasks. However, the self-attention used in the Transformer requires computation quadratic in the input length. In this paper, we propose applying lightweight and dynamic convolution to E2E ASR as an alternative to self-attention, making the computational order linear. We also propose joint training with connectionist temporal classification, convolution along the frequency axis, and combination with self-attention. With these techniques, the proposed architectures achieve performance comparable or superior to the Transformer on various ASR benchmarks, including noisy and reverberant tasks.
Attention-based ASR with Lightweight and Dynamic Convolutions (External Site Link)
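The linear-time alternative to self-attention mentioned in the abstract can be illustrated with a minimal NumPy sketch of lightweight convolution (Wu et al., 2019): a depthwise convolution over the time axis whose kernel is softmax-normalized and shared across the channels within each head. Dynamic convolution would additionally predict the kernel from each input frame; that step is omitted here. All names and shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax used to normalize each head's kernel.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lightweight_conv(x, w, num_heads):
    """Lightweight convolution over the time axis (illustrative sketch).

    x: (T, d) input sequence.
    w: (num_heads, k) kernel weights, shared by the d // num_heads
       channels of each head and softmax-normalized over the kernel width.
    Cost is O(T * k * d), i.e. linear in the sequence length T,
    versus the O(T^2) pairwise scores of self-attention.
    """
    T, d = x.shape
    H, k = w.shape
    assert d % H == 0, "channels must divide evenly into heads"
    w = softmax(w, axis=-1)                 # normalize each kernel
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))    # zero-pad the time axis
    y = np.zeros_like(x)
    cph = d // H                            # channels per head
    for t in range(T):
        window = xp[t:t + k]                # (k, d) local context
        for h in range(H):
            sl = slice(h * cph, (h + 1) * cph)
            # Each head mixes its k-frame window with shared weights.
            y[t, sl] = w[h] @ window[:, sl]
    return y
```

With all-zero kernel parameters, softmax yields a uniform kernel, so each output frame is simply the average of its k-frame neighborhood; a trained model would instead learn peaked, content-independent kernels per head.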