View synthesis based on temporal prediction via warped motion vector fields

Abstract:

The demand for 3D content has increased in recent years as 3D displays have become widespread. View synthesis methods, such as depth-image-based rendering (DIBR), provide an efficient tool for 3D content creation and transmission, and are integrated into coding solutions for multiview video content such as 3D-HEVC. In this paper, we propose a view synthesis method that takes advantage of temporal and inter-view correlations in multiview video sequences. We warp motion vector fields computed in reference views to obtain temporal predictions of a frame in the synthesized view and blend them with the DIBR synthesis. Our method is shown to bring average gains of 0.42 dB when tested on several multiview sequences.
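For illustration, the sketch below outlines the general idea described in the abstract: a motion vector field estimated in a reference view is warped into the virtual viewpoint, used to motion-compensate the previous virtual-view frame into a temporal prediction, and blended with the DIBR-synthesized frame. This is a minimal sketch under simplifying assumptions (rectified cameras with purely horizontal disparity, nearest-neighbor forward warping, a single reference view, and a fixed blending weight alpha); the function names and the blending rule are illustrative, not the exact scheme of the paper.

```python
import numpy as np

def warp_motion_field(mv_ref, disparity):
    """Warp a dense motion vector field from a reference view into the
    virtual viewpoint along the horizontal disparity derived from depth
    (assumes rectified cameras; forward warping, nearest neighbor)."""
    h, w, _ = mv_ref.shape
    mv_virt = np.zeros_like(mv_ref)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xv = int(round(x - disparity[y, x]))  # column in the virtual view
            if 0 <= xv < w:
                mv_virt[y, xv] = mv_ref[y, x]
                filled[y, xv] = True
    return mv_virt, filled

def temporal_prediction(prev_virt, mv_virt):
    """Motion-compensate the previous virtual-view frame with the warped
    motion field to predict the current virtual-view frame."""
    h, w = prev_virt.shape
    pred = np.zeros_like(prev_virt)
    for y in range(h):
        for x in range(w):
            dy, dx = mv_virt[y, x]
            ys = min(max(int(round(y + dy)), 0), h - 1)
            xs = min(max(int(round(x + dx)), 0), w - 1)
            pred[y, x] = prev_virt[ys, xs]
    return pred

def blend_synthesis(dibr_frame, temporal_pred, filled, alpha=0.5):
    """Blend the inter-view (DIBR) synthesis with the temporal prediction;
    pixels with no warped motion vector fall back to DIBR alone."""
    out = dibr_frame.astype(np.float64).copy()
    out[filled] = alpha * dibr_frame[filled] + (1.0 - alpha) * temporal_pred[filled]
    return out
```

A toy call chains the three steps: mv_virt, filled = warp_motion_field(mv_ref, disparity), then blend_synthesis(dibr, temporal_prediction(prev_virt, mv_virt), filled). The abstract mentions blending temporal predictions (plural) with the DIBR synthesis; for brevity this sketch handles a single warped motion field and a fixed weight.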

Document type:
Conference paper
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2016), Mar 2016, Shanghai, China.

https://hal-imt.archives-ouvertes.fr/hal-01287903
Contributor: Admin Télécom Paristech
Submitted on: Monday, March 14, 2016 - 12:41:28
Last modified on: Thursday, January 11, 2018 - 06:23:39

Identifiers

  • HAL Id: hal-01287903, version 1

Citation

Andrei Purica, M. Cagnazzo, Beatrice Pesquet-Popescu, Frederic Dufaux, Bogdan Ionescu. View synthesis based on temporal prediction via warped motion vector fields. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2016), Mar 2016, Shanghai, China. 〈hal-01287903〉
