AVLnet: Learning Audio-Visual Language Representations from Instructional Videos
Andrew Rouditchenko*, Angie Boggust*, David Harwath, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Brian Kingsbury, Dan Gutfreund, Michael Picheny, Antonio Torralba, James Glass
arXiv:2006.09199

Figure: AVLnet (Audio-Visual Language Network) model architecture.


Current methods for learning visually grounded language from videos often rely on time-consuming and expensive data collection, such as human-annotated textual summaries or machine-generated automatic speech recognition transcripts. In this work, we introduce the Audio-Video Language Network (AVLnet), a self-supervised network that learns a shared audio-visual embedding space directly from raw video inputs. We circumvent the need for annotation and instead learn audio-visual language representations directly from randomly segmented video clips and their raw audio waveforms. We train AVLnet on publicly available instructional videos and evaluate our model on video clip and language retrieval tasks on three video datasets. Our proposed model outperforms several state-of-the-art text-video baselines by up to 11.8% on a video clip retrieval task, despite operating on raw audio instead of manually annotated text captions. Further, we show that AVLnet can integrate textual information, increasing its modularity and improving performance by up to 20.3% on the video clip retrieval task. Finally, we analyze AVLnet's learned representations, showing that the model relates visual objects to salient words and natural sounds.
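The retrieval evaluation described above works by embedding audio queries and video clips into the shared space and ranking clips by similarity. The following is a minimal illustrative sketch of that protocol, not the paper's code: the embedding dimensions, toy data, and the `recall_at_k` helper are all assumptions for demonstration, with cosine similarity standing in for the learned similarity function.

```python
import numpy as np

def recall_at_k(sim, k):
    # sim[i, j]: similarity between audio query i and video clip j;
    # by convention the matching clip for query i sits at index i.
    ranks = (-sim).argsort(axis=1)            # clip indices, best first
    hits = (ranks[:, :k] == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return hits.mean()                        # fraction of queries recalled

rng = np.random.default_rng(0)
# Toy embeddings standing in for points in a shared audio-visual space
# (sizes and noise level are illustrative, not from the paper).
audio = rng.normal(size=(5, 8))
video = audio + 0.05 * rng.normal(size=(5, 8))  # paired clips lie nearby
audio /= np.linalg.norm(audio, axis=1, keepdims=True)
video /= np.linalg.norm(video, axis=1, keepdims=True)
sim = audio @ video.T                         # cosine similarity matrix
print(recall_at_k(sim, 1))
```

In practice the embeddings come from the trained audio and visual branches; Recall@1, @5, and @10 over such a similarity matrix are the standard clip-retrieval metrics the abstract refers to.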

@article{rouditchenko2020avlnet,
  title={AVLnet: Learning Audio-Visual Language Representations from Instructional Videos},
  author={Rouditchenko, Andrew and Boggust, Angie and Harwath, David and Joshi, Dhiraj and
  Thomas, Samuel and Audhkhasi, Kartik and Feris, Rogerio and Kingsbury, Brian and
  Picheny, Michael and Torralba, Antonio and Glass, James},
  journal={arXiv preprint arXiv:2006.09199},
  year={2020}
}