ADVISOR: Frédéric Devernay
TEL: 04 76 61 52 58
EMAIL: frederic.devernay at inria.fr
TEAM AND LAB: PRIMA team, Inria Rhône-Alpes & LIG
THEMES: Image, Vision
DURATION: 3 to 6 months
Multi-view video capture systems consist of several cameras (from two to dozens) capturing the same live 3D scene from different angles. Examples of such systems are the stereoscopic camera rigs used for shooting 3D movies, and multi-view capture systems such as the GRIMAGE platform at Inria. Usually, these systems require perfectly synchronized cameras, so that there is no time difference between the images taken from the various viewpoints. However, synchronizing cameras is expensive, difficult, and sometimes impossible, for example when using consumer cameras or mobile devices. For this reason, we propose to capture with an unsynchronized multi-view video setup [7,8], and to synchronize the cameras after capture, using sound [1] and/or images [2,3,4,5,6,9,10]. Once all videos have been synchronized with sub-frame precision, we can apply retiming techniques, which synthesize perfectly synchronized videos by interpolating between the original video frames.
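As a minimal illustration of the audio-based approach [1], the time offset between the soundtracks of two cameras can be estimated from the peak of their cross-correlation. The sketch below (function names and test signal are ours, not from the cited papers) recovers whole-sample offsets only; reaching the sub-frame precision needed for retiming would additionally require interpolating around the correlation peak.

```python
import numpy as np

def estimate_offset(a, b, sample_rate):
    """Estimate the delay of soundtrack b relative to soundtrack a,
    in seconds, via FFT-based cross-correlation.
    Positive result: events in b occur later than in a."""
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()       # next power of two, avoids circular aliasing
    fa = np.fft.rfft(a, size)
    fb = np.fft.rfft(b, size)
    xcorr = np.fft.irfft(fa * np.conj(fb), size)
    # Reorder circular correlation into lags -(len(b)-1) .. len(a)-1
    full = np.concatenate((xcorr[size - (len(b) - 1):], xcorr[:len(a)]))
    lag = np.argmax(full) - (len(b) - 1)   # lag of a relative to b, in samples
    return -lag / sample_rate

# Synthetic check: the same noise burst, delayed by 0.5 s in track b
rng = np.random.default_rng(0)
rate = 8000
burst = rng.standard_normal(rate)
a = np.concatenate((burst, np.zeros(rate)))
b = np.concatenate((np.zeros(rate // 2), burst, np.zeros(rate // 2)))
offset = estimate_offset(a, b, rate)       # close to 0.5
```

In practice the soundtracks would first be band-pass filtered and normalized, since consumer-camera microphones differ in gain and frequency response.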
Please send a motivation letter and a résumé (CV) by email to frederic.devernay at inria.fr
[1] Hasler N., Rosenhahn B., Thormählen T., Wand M., Gall J., Seidel H.-P.: Markerless Motion Capture with Unsynchronized Moving Cameras. In Proc. of CVPR'09 (Washington, June 2009), IEEE Computer Society
[2] Yaron Caspi, Denis Simakov, and Michal Irani. Feature-Based Sequence-to-Sequence Matching. Int. J. Comput. Vision 68(1), June 2006, pp. 53-64. DOI: 10.1007/s11263-005-4842-z
[3] Ying Piao and Jun Sato. Computing Epipolar Geometry from Unsynchronized Cameras. In Proc. 14th International Conference on Image Analysis and Processing (ICIAP 2007), Sept. 2007, pp. 475-480. DOI: 10.1109/ICIAP.2007.4362823
[4] Wang H., Sun M., Yang R.: Space-Time Light Field Rendering. IEEE Trans. Visualization and Computer Graphics (2007), 697-710.
[5] Anthony Whitehead, Robert Laganiere, and Prosenjit Bose. Temporal synchronization of video sequences in theory and in practice. Motion and Video Computing, 2:132-137, 2005.
[6] Meyer B., Stich T., Magnor M., Pollefeys M.: Subframe Temporal Alignment of Non-Stationary Cameras. In Proc. British Machine Vision Conference BMVC '08 (2008).
[7] Christian Lipski, Christian Linz, Kai Berger, Anita Sellent, and Marcus Magnor: "Virtual Video Camera: Image-Based Viewpoint Navigation Through Space and Time", Computer Graphics Forum, vol. 29, no. 8, pp. 2555-2568, December 2010.
[8] L. Ballan, J. Puwein, G. Brostow, M. Pollefeys, Unstructured Video-Based Rendering: Interactive Exploration of Casually Captured Videos, ACM Transactions on Graphics (SIGGRAPH 2010).
[9] S. Sinha, M. Pollefeys. Visual-Hull Reconstruction from Uncalibrated and Unsynchronized Video Streams, Second International Symposium on 3D Data Processing, Visualization & Transmission, 2004.
[10] J. Revaud, M. Douze, C. Schmid, and H. Jégou. Event retrieval in large video collections with circulant temporal encoding. In Proc. CVPR 2013. DOI: 10.1109/CVPR.2013.318
[11] G. Evangelidis and C. Bauckhage. Efficient subframe video alignment using short descriptors. IEEE Trans. PAMI 35(10), Nov. 2013.
[12] H. Jiang, H. Liu, P. Tan, G. Zhang, and H. Bao. 3D Reconstruction of dynamic scenes with multiple handheld cameras. In ECCV, 2012.
[13] T. Tuytelaars and L. V. Gool. Synchronizing video sequences. In CVPR, 2004.