TY - GEN
T1 - Gradient-based 2D-To-3D Conversion for Soccer Videos
AU - Calagari, Kiana
AU - Elgharib, Mohamed
AU - Didyk, Piotr
AU - Kaspar, Alexandre
AU - Matusik, Wojciech
AU - Hefeeda, Mohamed
N1 - Publisher Copyright:
© 2015 ACM.
PY - 2015/10/13
Y1 - 2015/10/13
N2 - A widespread adoption of 3D videos and technologies is hindered by the lack of high-quality 3D content. One promising solution to address this problem is to use automated 2D-To-3D conversion. However, current conversion methods, while general, produce low-quality results with artifacts that are not acceptable to many viewers. We address this problem by showing how to construct a high-quality, domain-specific conversion method for soccer videos. We propose a novel, data-driven method that generates stereoscopic frames by transferring depth information from similar frames in a database of 3D stereoscopic videos. Creating a database of 3D stereoscopic videos with accurate depth is, however, very difficult. One of the key findings in this paper is showing that computer-generated content in current sports computer games can be used to generate a high-quality 3D video reference database for 2D-To-3D conversion methods. Once we retrieve similar 3D video frames, our technique transfers depth gradients to the target frame while respecting object boundaries. It then computes depth maps from the gradients, and generates the output stereoscopic video. We implement our method and validate it by conducting user studies that evaluate depth perception and visual comfort of the converted 3D videos. We show that our method produces high-quality 3D videos that are almost indistinguishable from videos shot by stereo cameras. In addition, our method significantly outperforms the current state-of-the-art method. For example, up to 20% improvement in the perceived depth is achieved by our method, which translates to improving the mean opinion score from Good to Excellent.
AB - A widespread adoption of 3D videos and technologies is hindered by the lack of high-quality 3D content. One promising solution to address this problem is to use automated 2D-To-3D conversion. However, current conversion methods, while general, produce low-quality results with artifacts that are not acceptable to many viewers. We address this problem by showing how to construct a high-quality, domain-specific conversion method for soccer videos. We propose a novel, data-driven method that generates stereoscopic frames by transferring depth information from similar frames in a database of 3D stereoscopic videos. Creating a database of 3D stereoscopic videos with accurate depth is, however, very difficult. One of the key findings in this paper is showing that computer-generated content in current sports computer games can be used to generate a high-quality 3D video reference database for 2D-To-3D conversion methods. Once we retrieve similar 3D video frames, our technique transfers depth gradients to the target frame while respecting object boundaries. It then computes depth maps from the gradients, and generates the output stereoscopic video. We implement our method and validate it by conducting user studies that evaluate depth perception and visual comfort of the converted 3D videos. We show that our method produces high-quality 3D videos that are almost indistinguishable from videos shot by stereo cameras. In addition, our method significantly outperforms the current state-of-the-art method. For example, up to 20% improvement in the perceived depth is achieved by our method, which translates to improving the mean opinion score from Good to Excellent.
KW - 2D-To-3D conversion
KW - 3D video
KW - Depth estimation
UR - https://www.scopus.com/pages/publications/84962878065
U2 - 10.1145/2733373.2806262
DO - 10.1145/2733373.2806262
M3 - Conference contribution
AN - SCOPUS:84962878065
T3 - MM 2015 - Proceedings of the 2015 ACM Multimedia Conference
SP - 331
EP - 340
BT - MM 2015 - Proceedings of the 2015 ACM Multimedia Conference
PB - Association for Computing Machinery, Inc
T2 - 23rd ACM International Conference on Multimedia, MM 2015
Y2 - 26 October 2015 through 30 October 2015
ER -