Speech Recognition
- Recurrent Neural Network Based Language Model
- ~50% reduction in perplexity
- mixture of several basic RNN architectures
- Wall Street Journal task
- connectionist language models outperform standard n-gram techniques, but at the cost of high computational (training) complexity
- breaks the myth that language modeling is just about counting n-grams, and that the only reasonable way to improve results is by acquiring more training data
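To make the first set of notes concrete, here is a minimal pure-Python sketch of a single step of an Elman-style recurrent language model: the hidden state mixes the current word (one-hot) with the previous hidden state, and a softmax over the hidden state gives the next-word distribution. This is an illustrative toy with made-up weights, not the paper's implementation; the names `rnn_lm_step`, `U`, `W`, `V` are my own.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def rnn_lm_step(x_onehot, h_prev, U, W, V):
    """One step of a simple RNN LM.
    h_t = sigmoid(U x_t + W h_{t-1});  y_t = softmax(V h_t)."""
    H = len(h_prev)
    h = [sigmoid(sum(U[i][j] * x_onehot[j] for j in range(len(x_onehot)))
                 + sum(W[i][k] * h_prev[k] for k in range(H)))
         for i in range(H)]
    # y is a probability distribution over the vocabulary for the next word
    y = softmax([sum(V[o][i] * h[i] for i in range(H)) for o in range(len(V))])
    return h, y

# Toy example: vocabulary of 3 words, hidden size 2, arbitrary fixed weights
U = [[0.1, 0.2, 0.3], [0.0, 0.1, 0.2]]
W = [[0.1, 0.0], [0.0, 0.1]]
V = [[0.5, -0.5], [0.2, 0.1], [-0.3, 0.4]]
h, y = rnn_lm_step([1, 0, 0], [0.0, 0.0], U, W, V)
```

Feeding the returned `h` back in as `h_prev` on the next word is what lets the model condition on unbounded history, which is exactly what fixed-order n-gram counting cannot do.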
- Towards End-To-End Speech Recognition with Recurrent Neural Networks
- character-level speech recognition system that directly transcribes audio data with text using a recurrent neural network
- combination of the deep bidirectional LSTM recurrent neural network architecture and a modified Connectionist Temporal Classification (CTC) objective function
- evaluated by word error rate (WER)
- Wall Street Journal task
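The core of the CTC objective mentioned above can be sketched in a few lines: many frame-level paths (with blanks and repeats) collapse to one transcription, and CTC sums their probabilities with a forward recursion over the blank-extended label. This is a minimal sketch under my own naming (`ctc_collapse`, `ctc_forward`), not the paper's modified objective or a production implementation.

```python
def ctc_collapse(path, blank="-"):
    """Map a frame-level path to its transcription:
    merge repeated symbols, then drop blanks."""
    out, prev = [], None
    for c in path:
        if c != prev and c != blank:
            out.append(c)
        prev = c
    return "".join(out)

def ctc_forward(probs, label, blank=0):
    """P(label | probs) summed over all collapsing paths.
    probs: T x K per-frame symbol probabilities; label: symbol indices."""
    ext = [blank]                      # blank-extended label: -, l1, -, l2, ...
    for s in label:
        ext += [s, blank]
    S, T = len(ext), len(probs)
    alpha = [[0.0] * S for _ in range(T)]
    alpha[0][0] = probs[0][blank]      # start with blank or first label symbol
    if S > 1:
        alpha[0][1] = probs[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1][s]        # stay on the same extended symbol
            if s > 0:
                a += alpha[t - 1][s - 1]   # advance by one
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1][s - 2]   # skip a blank between distinct labels
            alpha[t][s] = a * probs[t][ext[s]]
    # finish on the last label symbol or the trailing blank
    return alpha[T - 1][S - 1] + (alpha[T - 1][S - 2] if S > 1 else 0.0)
```

For example, with 2 frames, a blank and one symbol `a`, the paths `a-`, `-a`, and `aa` all collapse to `a`, and `ctc_forward` returns exactly the sum of their three path probabilities. In the paper this quantity is maximized over a bidirectional LSTM's per-frame outputs so no frame-level alignment is ever needed.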