Convolutional Tensor-Train LSTM for Spatio-temporal Learning

J Su*, W Byeon*, F Huang, J Kautz, A Anandkumar | (*) equal contributions | NeurIPS 2020

[arxiv]   [paper]   [Supp]   [code]   [project page]

Abstract

Learning from spatio-temporal data has numerous applications such as human-behavior analysis, object tracking, video compression, and physics simulation. However, existing methods still perform poorly on challenging video tasks such as long-term forecasting. This is because these kinds of challenging tasks require learning long-term spatio-temporal correlations in the video sequence. In this paper, we propose a higher-order convolutional LSTM model that can efficiently learn these correlations, along with a succinct representation of the history. This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time. To make this feasible in terms of computation and memory requirements, we propose a novel convolutional tensor-train decomposition of the higher-order model. This decomposition reduces the model complexity by jointly approximating a sequence of convolutional kernels as a low-rank tensor-train factorization. As a result, our model outperforms existing approaches while using only a fraction of the parameters of the baseline models. Our results achieve state-of-the-art performance in a wide range of applications and datasets, including multi-step video prediction on the Moving-MNIST-2 and KTH action datasets as well as early activity recognition on the Something-Something V2 dataset.
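The parameter savings come from the low-rank tensor-train factorization mentioned above. The following is a minimal NumPy sketch of the generic tensor-train idea (small cores contracted along rank dimensions instead of one large tensor), not the paper's actual convolutional variant or implementation; all names and sizes are illustrative.

```python
import numpy as np

# Illustrative sketch: an order-3 tensor of shape (d, d, d) is represented
# by three small tensor-train cores with boundary ranks 1 and internal rank r,
# then reconstructed by chained contractions over the rank dimensions.
# (The paper's CTT decomposition applies this idea jointly to a sequence of
# convolutional kernels; spatial convolution is omitted here for brevity.)
rng = np.random.default_rng(0)
d, r = 4, 2
G1 = rng.standard_normal((1, d, r))   # core 1: rank 1 -> r
G2 = rng.standard_normal((r, d, r))   # core 2: rank r -> r
G3 = rng.standard_normal((r, d, 1))   # core 3: rank r -> 1

# Contract adjacent cores along shared rank indices to recover the full tensor
full = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)
assert full.shape == (d, d, d)

# Parameter comparison: storing the full tensor vs. storing the cores
n_full = d ** 3                       # 4**3 = 64 entries
n_tt = G1.size + G2.size + G3.size    # 8 + 16 + 8 = 32 entries
```

With longer histories and larger channel counts, the full higher-order tensor grows exponentially in the number of time steps, while the tensor-train cores grow only linearly, which is the complexity reduction the abstract refers to.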

Video Prediction Results

Early Activity Recognition Results

@misc{su2020convolutional,
    title={Convolutional Tensor-Train LSTM for Spatio-temporal Learning},
    author={Jiahao Su and Wonmin Byeon and Furong Huang and Jan Kautz and Animashree Anandkumar},
    year={2020},
    eprint={2002.09131},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}