
Dynamic Motion Estimation and Evolution Video Prediction Network

Abstract

Future video prediction provides valuable information that helps a machine understand its surrounding environment and make critical decisions in real time. However, long-term video prediction remains a challenging problem due to the complicated spatiotemporal dynamics in a video. In this paper, we propose a dynamic motion estimation and evolution (DMEE) network model to generate unseen future videos from videos observed in the past. Our primary contribution is to use trained kernels in convolutional neural network (CNN) and long short-term memory (LSTM) architectures, adapted to each time step and sample position, to efficiently manage spatiotemporal dynamics. DMEE uses motion estimation (ME) and motion update (MU) kernels to predict the future video through an end-to-end prediction-update process. In the prediction step, the ME kernel estimates the temporal changes. In the update step, the MU kernel combines the estimates with previously generated frames, used as reference frames, via a weighted average. The kernels are used not only for the current frame but are also evolved to generate successive frames, enabling temporally specific filtering. We perform qualitative and quantitative performance analyses based on the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and a video classification score developed to examine the visual quality of the generated videos. Experiments demonstrate that our algorithm achieves qualitative and quantitative performance superior to that of current state-of-the-art algorithms. Our source code is available at https://github.com/Nayoung-Kim-ICP/Video-Generation.
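To make the prediction-update process concrete, the following is a minimal illustrative sketch in PyTorch, not the authors' implementation. The class and variable names (DMEESketch, me, mu, evolve, alpha, state), the plain convolutional stand-ins for the CNN/LSTM kernel modules, and the blending scheme are all assumptions made for exposition; the actual DMEE architecture is defined in the repository linked above.

import torch
import torch.nn as nn

class DMEESketch(nn.Module):
    """Hypothetical sketch of a DMEE-style prediction-update loop.
    All names and module structures here are illustrative assumptions."""

    def __init__(self, channels=3, hidden=64):
        super().__init__()
        # Stand-in ME kernel: estimates the temporal change from the last
        # frame and a recurrent state. (The paper uses CNN/LSTM kernels
        # adapted per time step; a plain conv stack keeps this sketch short.)
        self.me = nn.Sequential(
            nn.Conv2d(channels + hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )
        # Stand-in MU kernel: produces per-pixel weights for blending the
        # motion estimate with a previously generated reference frame.
        self.mu = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1), nn.Sigmoid(),
        )
        # Stand-in kernel-evolution step: updates the recurrent state so
        # the filtering is specific to each successive time step.
        self.evolve = nn.Conv2d(channels + hidden, hidden, 3, padding=1)

    def forward(self, frames, horizon):
        # frames: (batch, time, channels, H, W) observed past video
        b, t, c, h, w = frames.shape
        state = torch.zeros(b, self.evolve.out_channels, h, w,
                            device=frames.device)
        prev = frames[:, -1]  # last observed frame is the first reference
        outputs = []
        for _ in range(horizon):
            # Prediction step: ME kernel estimates the temporal change.
            estimate = prev + self.me(torch.cat([prev, state], dim=1))
            # Update step: MU kernel combines the estimate with the
            # reference frame via a learned per-pixel weighted average.
            alpha = self.mu(torch.cat([estimate, prev], dim=1))
            frame = alpha * estimate + (1 - alpha) * prev
            # Evolve the recurrent state for the next time step.
            state = torch.tanh(self.evolve(torch.cat([frame, state], dim=1)))
            outputs.append(frame)
            prev = frame  # the generated frame becomes the next reference
        return torch.stack(outputs, dim=1)

# Hypothetical usage: predict 5 future frames from 10 observed 64x64 RGB frames.
model = DMEESketch()
future = model(torch.randn(1, 10, 3, 64, 64), horizon=5)  # (1, 5, 3, 64, 64)

The key structural point the sketch tries to capture is the closed loop: each generated frame feeds back as the reference for the next step, while the evolving state lets the effective filtering differ at every time step.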
