Modeling sub-event dynamics in first-person action recognition

Bibliographic Details
Main Authors: Mohd Zaki, Hasan Firdaus; Shafait, Faisal; Mian, Ajmal S.
Format: Conference or Workshop Item
Language: English
Published: IEEE 2017
Online Access: http://irep.iium.edu.my/64353/
http://irep.iium.edu.my/64353/8/64353%20Modeling%20Sub-Event%20Dynamics%20in%20First-Person%20Action%20Recognition.pdf
http://irep.iium.edu.my/64353/7/64353%20Modeling%20sub-event%20dynamics%20in%20first-person%20action%20recognition%20SCOPUS.pdf
Description
Summary: First-person videos have unique characteristics such as heavy egocentric motion, strong preceding events, salient transitional activities and post-event impacts. Action recognition methods designed for third-person videos may not optimally represent actions captured in first-person videos. We propose a method to represent the high-level dynamics of sub-events in first-person videos by dynamically pooling features of sub-intervals of a time series using a temporal feature pooling function. The sub-event dynamics are then temporally aligned to form a new series. To track how the sub-event dynamics evolve over time, we recursively apply the Fast Fourier Transform on a pyramidal temporal structure; the Fourier coefficients of each segment constitute the overall video representation. We perform experiments on two existing benchmark first-person video datasets, both of which were captured in controlled environments. To address this limitation, we introduce a new dataset collected from YouTube that has a larger number of classes and a greater diversity of capture conditions, thereby more closely depicting the real-world challenges of first-person video analysis. We compare our method to state-of-the-art first-person and generic video recognition algorithms. Our method consistently outperforms the nearest competitors by 10.3%, 3.3% and 11.7% on the three datasets, respectively.
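
The summary above describes the pipeline concretely enough to sketch its skeleton: pool per-frame features over temporal sub-intervals, then describe segments of a temporal pyramid by their Fourier coefficients. The Python sketch below is an illustration only, not the authors' implementation; the max-pooling choice, the number of sub-intervals, the pyramid depth, the number of retained coefficients and all function names are assumptions, and the recursion and temporal alignment mentioned in the summary are simplified to a flat pyramid.

import numpy as np

def pool_sub_events(features, num_subintervals=8):
    # Max-pool frame features over equal sub-intervals (an assumed pooling
    # function standing in for the paper's temporal feature pooling).
    chunks = np.array_split(features, num_subintervals, axis=0)
    return np.stack([chunk.max(axis=0) for chunk in chunks])  # (num_subintervals, D)

def fourier_descriptor(segment, num_coeffs=4):
    # Magnitudes of the lowest-frequency FFT coefficients of each feature
    # dimension, flattened into a single vector.
    spectrum = np.fft.rfft(segment, axis=0)                    # (freqs, D)
    return np.abs(spectrum[:num_coeffs]).ravel()

def pyramid_representation(features, levels=3):
    # Build the sub-event series, split it into 1, 2, 4, ... segments
    # (a temporal pyramid) and concatenate the Fourier descriptors.
    sub_events = pool_sub_events(features)
    parts = []
    for level in range(levels):
        for segment in np.array_split(sub_events, 2 ** level, axis=0):
            parts.append(fourier_descriptor(segment))
    return np.concatenate(parts)

# Example: 120 frames of 512-dimensional features -> one fixed-length descriptor.
video_features = np.random.rand(120, 512)
descriptor = pyramid_representation(video_features)

Any temporal pooling function with the same signature (frames in, one vector per sub-interval out) could replace the max pooling above, and the resulting fixed-length descriptor could then be fed to any standard classifier.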