With a Little Help from my Temporal Context: Multimodal Egocentric Action Recognition (BMVC 2021)

Published: November 03, 2021

arXiv: https://arxiv.org/abs/2111.01024
Project webpage: https://ekazakos.github.io/MTCN-project/

Abstract:

In egocentric videos, actions occur in quick succession. We capitalise on the action's temporal context and propose a method that learns to attend to surrounding actions in order to improve recognition performance. To incorporate the temporal context, we propose a transformer-based multimodal model that ingests video and audio as input modalities, with an explicit language model providing action-sequence context to enhance the predictions. We test our approach on the EPIC-KITCHENS and EGTEA datasets, reporting state-of-the-art performance. Our ablations showcase the advantage of utilising temporal context, as well as of incorporating the audio modality and the language model to rescore predictions.
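To give a feel for the rescoring idea in the abstract, here is a toy sketch (not the authors' code, and not their actual model): per-action audiovisual scores are combined with a bigram "language model" over action classes via Viterbi decoding, so that temporal context can flip an ambiguous prediction. All class names, probabilities, and the bigram formulation are illustrative assumptions.

```python
import math

def rescore(av_log_probs, lm_log_probs):
    """Viterbi decoding over an action sequence.

    av_log_probs[t][c]: log-probability of class c at step t from the
                        (hypothetical) audiovisual model.
    lm_log_probs[p][c]: log-probability of class c following class p,
                        a toy bigram language model over actions.
    Returns the highest-scoring class sequence.
    """
    T, C = len(av_log_probs), len(av_log_probs[0])
    score = list(av_log_probs[0])   # best score ending in each class at t=0
    back = []                       # backpointers for reconstruction
    for t in range(1, T):
        new_score, ptr = [], []
        for c in range(C):
            best_p = max(range(C), key=lambda p: score[p] + lm_log_probs[p][c])
            new_score.append(score[best_p] + lm_log_probs[best_p][c]
                             + av_log_probs[t][c])
            ptr.append(best_p)
        score, _ = new_score, back.append(ptr)
    # Backtrack from the best final class.
    c = max(range(C), key=lambda k: score[k])
    seq = [c]
    for ptr in reversed(back):
        c = ptr[c]
        seq.append(c)
    return list(reversed(seq))

# Toy example: classes 0 = "wash", 1 = "cut" (made-up labels).
# Step 1 is ambiguous in isolation; the bigram prior says "cut" tends to
# follow "wash", which flips the middle prediction.
av = [[math.log(0.9), math.log(0.1)],
      [math.log(0.55), math.log(0.45)],   # ambiguous step
      [math.log(0.1), math.log(0.9)]]
lm = [[math.log(0.3), math.log(0.7)],     # after "wash"
      [math.log(0.2), math.log(0.8)]]     # after "cut"

independent = [max(range(2), key=lambda c: row[c]) for row in av]
print(independent)      # per-step argmax, no context: [0, 0, 1]
print(rescore(av, lm))  # with temporal context: [0, 1, 1]
```

The actual MTCN model attends over neighbouring actions with a transformer rather than a bigram, but the rescoring intuition is the same: surrounding actions constrain what the current action is likely to be.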