Deep Audio-Visual Model for Speech Recognition
In this project we study speech recognition, specifically predicting individual words from both video frames and audio. Powered by convolutional neural networks, recent speech recognition and lip-reading models approach human-level performance. We reimplemented a state-of-the-art audio-visual model and developed several variants of it. We then conducted experiments examining the effectiveness of the attention mechanism, the use of a more accurate pre-trained residual network as the visual backbone, and the sensitivity of the model to clean versus noisy audio input.
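To make the setup concrete, below is a minimal sketch of audio-visual word classification with cross-modal attention fusion, written in PyTorch. The module name AVWordClassifier, all layer sizes, and the assumed sequence length of 29 video frames are illustrative placeholders, not the actual architecture evaluated in this report; in the real model a pre-trained ResNet would serve as the visual backbone.

```python
# Illustrative sketch (hypothetical names and dimensions), not the evaluated model.
import torch
import torch.nn as nn

class AVWordClassifier(nn.Module):
    def __init__(self, num_words=500, dim=256):
        super().__init__()
        # Visual front end: a toy 3D conv stack; a pre-trained ResNet
        # backbone would replace this in the real model.
        self.visual = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(5, 7, 7),
                      stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # -> (B, 32, T, 1, 1)
        )
        self.visual_proj = nn.Linear(32, dim)
        # Audio front end: 1D convs over the raw waveform, pooled so the
        # audio time axis aligns with the video frames (assumed T=29).
        self.audio = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=80, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(29),
        )
        self.audio_proj = nn.Linear(32, dim)
        # Cross-modal attention: video features attend to audio features.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, num_words)

    def forward(self, video, audio):
        # video: (B, 1, T, H, W) grayscale mouth crops; audio: (B, 1, S) waveform
        v = self.visual(video).flatten(2).transpose(1, 2)  # (B, T, 32)
        v = self.visual_proj(v)                            # (B, T, dim)
        a = self.audio(audio).transpose(1, 2)              # (B, T, 32)
        a = self.audio_proj(a)                             # (B, T, dim)
        fused, _ = self.attn(query=v, key=a, value=a)      # (B, T, dim)
        # Pool over time, then classify into a fixed word vocabulary.
        return self.classifier(fused.mean(dim=1))          # (B, num_words)

model = AVWordClassifier()
logits = model(torch.randn(2, 1, 29, 112, 112), torch.randn(2, 1, 19456))
print(logits.shape)  # torch.Size([2, 500])
```

Attention fusion of this kind also suggests a natural noise-sensitivity experiment: corrupting the audio input and observing how the learned attention redistributes weight between the two modalities.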