Dynamic Motion Estimation and Evolution Video Prediction Network
- Subject (Keywords) Kernel, Dynamics, Convolution, Streaming media, Motion estimation, Adaptation models, Spatiotemporal phenomena, Long-term video generation and prediction, video understanding and analysis, deep learning, Convolutional Neural Network, Long Short-term Memory
- Subject (Other) Computer Science, Information Systems; Computer Science, Software Engineering; Telecommunications
- Description (General) [Kim, Nayoung; Kang, Je-Won] Ewha Womans Univ, Dept Elect & Elect Engn, Seoul 03760, South Korea; [Kang, Je-Won] Ewha Womans Univ, Smart Factory Multidisciplinary Program, Seoul 03760, South Korea
- Indexed In SCIE, SCOPUS
- Publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
- Publication Year 2021
- Publication Type Journal
- URI http://www.dcollection.net/handler/ewha/000000183848
- Language English
- Published As http://dx.doi.org/10.1109/TMM.2020.3035281
Abstract
Future video prediction provides valuable information that helps a machine understand its surrounding environment and make critical decisions in real time. However, long-term video prediction remains a challenging problem due to the complicated spatiotemporal dynamics in a video. In this paper, we propose a dynamic motion estimation and evolution (DMEE) network model to generate unseen future videos from videos observed in the past. Our primary contribution is to use trained kernels in convolutional neural network (CNN) and long short-term memory (LSTM) architectures, adapted to each time step and sample position, to efficiently manage spatiotemporal dynamics. DMEE uses motion estimation (ME) and motion update (MU) kernels to predict the future video through an end-to-end prediction-update process. In the prediction step, the ME kernel estimates temporal changes. In the update step, the MU kernel combines these estimates with previously generated frames, used as reference frames, via a weighted average. The kernels are not only applied to the current frame but are also evolved to generate successive frames, enabling temporally specific filtering. We perform qualitative and quantitative performance analyses based on the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and a video classification score developed to examine the visual quality of the generated video. Experiments demonstrate that our algorithm outperforms current state-of-the-art algorithms both qualitatively and quantitatively. Our source code is available at https://github.com/Nayoung-Kim-ICP/Video-Generation.
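The prediction-update cycle described in the abstract can be sketched schematically. The following is a minimal, hypothetical simplification, not the actual DMEE implementation: it uses a single grayscale frame, one fixed ME kernel in place of the learned, per-time-step kernels, and a scalar blending weight standing in for the learned MU kernel.

```python
import numpy as np

def predict_update_step(prev_frame, reference_frame, me_kernel, weight):
    """One schematic prediction-update step (illustrative only).

    Prediction: a motion-estimation (ME) kernel is cross-correlated with
    the previous frame to estimate temporal change.
    Update: the estimate is blended with a reference frame (a previously
    generated frame) by a weighted average, a stand-in for the learned
    motion-update (MU) kernel.
    """
    h, w = prev_frame.shape
    kh, kw = me_kernel.shape
    # Edge-pad so the filtered output has the same spatial size
    padded = np.pad(prev_frame, ((kh // 2,), (kw // 2,)), mode="edge")
    estimate = np.empty_like(prev_frame)
    for i in range(h):
        for j in range(w):
            estimate[i, j] = np.sum(padded[i:i + kh, j:j + kw] * me_kernel)
    # Update: weighted average of the ME estimate and the reference frame
    return weight * estimate + (1.0 - weight) * reference_frame
```

In DMEE both kernels are produced by the CNN/LSTM architecture and evolve across time steps; here the kernel and weight are fixed purely to make the prediction-update structure concrete.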