Fast Affine Motion Estimation for Versatile Video Coding (VVC) Encoding
- Keywords: Video compression, encoding complexity, motion estimation, HEVC, VVC, affine motion, reference frame search
- Subject areas: Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications
- Affiliation: [Park, Sang-Hyo; Kang, Je-Won] Ewha Womans Univ, Dept Elect & Elect Engn, Seoul 03760, South Korea
- Indexed in: SCIE, SCOPUS
- OA type: Gold
- Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
- Year: 2019
- URI http://www.dcollection.net/handler/ewha/000000165953
- Language: English
- Published As http://dx.doi.org/10.1109/ACCESS.2019.2950388
Abstract
In this paper, we propose a fast encoding method to facilitate the affine motion estimation (AME) process in versatile video coding (VVC) encoders. The recently launched VVC project for next-generation video coding standardization far outperforms the High Efficiency Video Coding (HEVC) standard in terms of coding efficiency. The first version of the VVC test model (VTM) displays superior coding efficiency, yet requires higher encoding complexity due to advanced inter-prediction techniques within the multi-type tree (MTT) structure. In particular, the AME technique in VVC is designed to reduce temporal redundancies beyond translational motion in dynamic scenes, thus achieving more accurate motion prediction. The VTM encoder, however, incurs considerable computational complexity because the AME process is invoked throughout the recursive MTT partitioning. In this paper, we introduce useful features that reflect the statistical characteristics of MTT and AME, and propose a method that employs these features to skip redundant AME processes. Experimental results show that, compared to VTM 3.0, the proposed method reduces the AME time of VTM to 63% on average, while the coding loss stays within 0.1% in the random-access configuration.
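The abstract's core idea, deciding from cheap statistical features whether affine motion estimation can be skipped for a coding unit before running it, can be illustrated with a minimal sketch. The feature set below (parent CU's affine usage, MTT split depth, block size) and all thresholds are illustrative assumptions, not the paper's published features or values.

```python
# Hypothetical illustration of feature-based AME skipping (not the paper's
# exact algorithm): before running the costly affine motion estimation (AME)
# search for a coding unit (CU), check simple statistics gathered during the
# recursive multi-type-tree (MTT) partitioning and skip AME when an affine
# mode is unlikely to win the rate-distortion decision.

def should_skip_ame(parent_used_affine, mtt_depth, block_width, block_height,
                    max_affine_depth=3, min_affine_size=16):
    """Return True if AME can be skipped for this CU.

    Features (illustrative assumptions, not the published feature set):
    - parent_used_affine: whether the parent CU selected an affine mode
    - mtt_depth: current depth in the recursive MTT split
    - block_width / block_height: AME rarely helps very small blocks
    """
    if block_width < min_affine_size or block_height < min_affine_size:
        return True   # small blocks: translational motion usually suffices
    if mtt_depth > max_affine_depth and not parent_used_affine:
        return True   # deep split with no affine ancestry: skip the search
    return False      # otherwise, run the full AME process
```

In an encoder loop, this predicate would gate the AME call so that only CUs passing the check pay its search cost, which is how a feature-based early-termination scheme trades a small coding loss for a large complexity reduction.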