Version 1.0 (28/11/2012), by Frédéric Devernay. (c) INRIA 2012.
This software allows the correct extraction of camera parameters from POV-Ray images generated using MegaPOV with the "annotation" patch from VLPov by Andrea Vedaldi. The annotation patch, the PNG 1.5 patch and the focal blur patch are also included with this software; see the "povray" subdirectory of the distribution.
Using this software and a vlpov-patched POV-Ray, you can, for example:
This library also handles POV-Ray renders where the up, right and dir vectors of the POV-Ray camera don't form an orthonormal set. This allows for cameras with:
In order to render these images, the Vista buffer feature of POV-Ray must be disabled (option -UV in megapov).
Please check out the following documentation on how to create stereoscopic pairs with POV-Ray:
vlpovutils-1.0.tar.gz (Version 1.0 from 28/12/2012)
This software requires the uBLAS component of Boost, so you may need to install the Boost development files (on Debian and Ubuntu, they are part of the libboost-dev package).
Download and unpack the distribution.
On Linux/Unix, check that the settings in the Makefile match your system, then type "make".
There is also an Xcode project for Mac OS X, in case you prefer it to the Makefile (you can install boost on Mac OS X using MacPorts or Homebrew).
If you want to compile it on MS Windows, you're on your own (sorry, I can't help you).
Here is how to generate some individual frames from the sample scenes (the +a0.0 +j0.0 options are for anti-aliasing):
mkdir results; cd results
megapov +w320 +h240 +a0.0 +j0.0 +L../data +Itest.pov +Otest.png
An animation with 30 frames and a forward motion:
megapov +w320 +h240 +a0.0 +j0.0 +L../data +KFI1 +KFF30 +KI0.0 +KF3.0 +Itest_anim.pov +Otest_anim.png
A camera with non-orthogonal principal axes (notice the -UV option required for non-standard cameras):
megapov -UV +w320 +h240 +a0.0 +j0.0 +L../data +Itest_nonortho.pov +Otest_nonortho.png
A multi-viewpoint stereo set, with 10 viewpoints (output images are rectified):
megapov -UV +w320 +h240 +a0.0 +j0.0 +L../data +KFI1 +KFF10 +KI0.0 +KF1.0 +Itest_stereo.pov +Otest_stereo.png
Suppose you generated frame1.png and frame2.png with megapov, together with the files created by the annotation patch (frame1.depth, frame1.txt, frame2.depth, frame2.txt).
There are 3 utilities: vlpov_project, vlpov_motionfield, and vlpov_motionfield2. The description of each utility is given below.
Usage: vlpov_project <frame1> [<frame2>]
Help:
  Compute the projection (x, y, depth) of 3D points in <frame1> and
  optionally the pixel motion to <frame2>. 3D point coordinates are read
  from standard input.
Arguments:
  <frame1>  base frame basename (the file with extension .txt will be read)
  <frame2>  second frame basename (the file with extension .txt will be read)
*Important note*: the 3D points must be given in a right-handed coordinate
system, where Z is the *opposite* of POV-Ray's Z.
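As an illustration, here is a minimal C++ sketch of the Z flip mentioned in the note above. It assumes (this is not stated above) that vlpov_project expects whitespace-separated "x y z" triples, one point per line, on standard input:

    // Minimal sketch: convert a point from POV-Ray's left-handed coordinates
    // to the right-handed convention expected by vlpov_project (negate Z),
    // and print it in an assumed "x y z" one-point-per-line format.
    #include <cstdio>

    int main() {
        double x = 1.0, y = 2.0, z = 3.0;    // the point in POV-Ray coordinates
        std::printf("%g %g %g\n", x, y, -z); // flip Z for the right-handed system
        return 0;
    }

Its output could then be piped into "vlpov_project frame1 frame2".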
Usage: vlpov_motionfield <frame1> <frame2>
Help:
  Compute a motion field from a POV-Ray rendered depth map to another camera.
  The motion field is written to <frame1>.<frame2>.mx and <frame1>.<frame2>.my
  as raw images containing big-endian (network-order) doubles - the same
  format as the depth map.
Arguments:
  <frame1>  first frame basename (files with extensions .depth and .txt will be read)
  <frame2>  second frame basename (the file with extension .txt will be read)
Output files:
  <frame1>.<frame2>.mx  motion field, x-component (raw big-endian doubles)
  <frame1>.<frame2>.my  motion field, y-component (raw big-endian doubles)
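Since the .mx/.my files are raw arrays of big-endian doubles, they can be loaded with a few lines of C++. This is a minimal sketch, assuming a headerless, row-major width*height layout with the same dimensions as the rendered images (320x240 in the examples above) and the <frame1>.<frame2>.mx naming scheme:

    // Minimal sketch: read one motion-field component written by
    // vlpov_motionfield. Assumes a headerless, row-major width*height array
    // of big-endian (network-order) doubles; the dimensions must be known
    // from the render.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main() {
        const int width = 320, height = 240;
        std::FILE* f = std::fopen("frame1.frame2.mx", "rb");
        if (!f) { std::perror("fopen"); return 1; }
        std::vector<double> mx(width * height);
        for (double& v : mx) {
            unsigned char b[8];
            if (std::fread(b, 1, 8, f) != 8) { std::fclose(f); return 1; }
            // Assemble the 64-bit pattern most-significant byte first, then
            // reinterpret it as a double; this works regardless of host byte order.
            uint64_t u = 0;
            for (int i = 0; i < 8; ++i) u = (u << 8) | b[i];
            std::memcpy(&v, &u, sizeof v);
        }
        std::fclose(f);
        std::printf("motion x at pixel (0,0): %g\n", mx[0]);
        return 0;
    }

The .depth files can be read the same way, since the help text above says they use the same format.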
Usage: vlpov_motionfield2 <frame1> <frame2>
Help:
  Compute a motion field from a POV-Ray rendered depth map to another camera.
Arguments:
  <frame1>  first frame basename (files with extensions .depth and .txt will be read)
  <frame2>  second frame basename (files with extensions .depth and .txt will be read)
Output files (all are in TIFF format):
  <frame1>.<frame2>.mx.tif   motion field, x-component (1-channel float)
  <frame1>.<frame2>.my.tif   motion field, y-component (1-channel float)
  <frame1>.<frame2>.occ.tif  occlusion map (1-channel uchar, 255=visible, 0=occluded)
  <frame2>.<frame1>.mx.tif   motion field, x-component (1-channel float)
  <frame2>.<frame1>.my.tif   motion field, y-component (1-channel float)
  <frame2>.<frame1>.occ.tif  occlusion map (1-channel uchar, 255=visible, 0=occluded)
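The TIFF outputs can be read with any TIFF library. Here is a minimal libtiff sketch for the occlusion map; it assumes the image is written as a strip-organized, single-channel 8-bit TIFF (the file name is chosen to match the naming scheme above):

    // Minimal sketch: read an occlusion map produced by vlpov_motionfield2
    // using libtiff (link with -ltiff). Assumes a strip-organized,
    // single-channel 8-bit image.
    #include <cstdint>
    #include <cstdio>
    #include <vector>
    #include <tiffio.h>

    int main() {
        TIFF* tif = TIFFOpen("frame1.frame2.occ.tif", "r");
        if (!tif) return 1;
        uint32_t w = 0, h = 0;
        TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &w);
        TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &h);
        std::vector<uint8_t> occ((size_t)w * h);
        for (uint32_t row = 0; row < h; ++row)
            TIFFReadScanline(tif, &occ[(size_t)row * w], row, 0); // one row of uchar values
        TIFFClose(tif);
        // 255 = visible in the other frame, 0 = occluded.
        size_t visible = 0;
        for (uint8_t v : occ) if (v == 255) ++visible;
        std::printf("%zu of %zu pixels visible\n", visible, (size_t)w * h);
        return 0;
    }

The .mx.tif/.my.tif files would be read the same way, with a float scanline buffer instead of uchar.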
All of this functionality is also directly accessible from C or C++ using libvlpov; see the library header (vlpov.hpp) for documentation. The library replicates some of the functionality of the Matlab files distributed with VLPov.
Realistic POV-Ray scenes with source files can be found in various places.
Good starting points are:
- Frédéric Devernay, Nov. 2012