9 Reconstruction from Video

Introduction

Note: This is an experimental feature

It is possible to build a reconstruction using a video file instead of still images. The technique for reconstructing the camera trajectory from a video is called Simultaneous Localization And Mapping (SLAM). OpenDroneMap uses the open-source ORB_SLAM2 library for this task.

Here we explain how to use it: we need to build the SLAM module, calibrate the camera, and finally run the reconstruction from a video.

Building with SLAM support

By default, OpenDroneMap does not build the SLAM module. To build it, we need to do the following two steps.

Build SLAM dependencies

sudo apt-get install libglew-dev
cd SuperBuild/build
cmake -DODM_BUILD_SLAM=ON ..
make
cd ../..

Build the SLAM module

cd build
cmake -DODM_BUILD_SLAM=ON ..
make
cd ..

Calibrating the camera

The SLAM algorithm requires the camera to be calibrated. It is difficult to extract calibration parameters from the video's metadata as we do when using still images, so you need to run a calibration procedure that computes the calibration from a video of a checkerboard.
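The procedure is the standard OpenCV chessboard calibration: detect the inner corners of the pattern in a number of frames and fit the camera intrinsics and distortion to them. The snippet below is only a minimal standalone sketch of that idea, assuming OpenCV (cv2) is installed, a 9x6 inner-corner pattern, and roughly one sampled frame per second; the actual tool to use is calibrate_video.py, shown further down.

# Minimal sketch of chessboard calibration from a video (not the ODM script).
# The pattern size and the frame sampling rate are assumptions; adjust them
# to your own setup.
import cv2
import numpy as np

pattern_size = (9, 6)  # inner corners of the chessboard (assumed)

# 3D coordinates of the corners in the pattern's own plane (z = 0)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

object_points, image_points = [], []
cap = cv2.VideoCapture('PATH_TO_CHESSBOARD_VIDEO.mp4')
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_index += 1
    if frame_index % 30 != 0:  # sample roughly one frame per second
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        object_points.append(objp)
        image_points.append(corners)
cap.release()

if not object_points:
    raise SystemExit('No chessboard detected; check the pattern size and video.')

# Fit intrinsics (fx, fy, cx, cy) and distortion (k1, k2, p1, p2, k3)
rms, K, dist, _, _ = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)
print('RMS reprojection error:', rms)
print('Camera matrix:\n', K)
print('Distortion coefficients:', dist.ravel())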

We will start by recording the calibration video. Display this chessboard pattern on a large screen, or print it on a large sheet of paper and stick it to a flat surface. Now record a video pointing the camera at the chessboard.

[Image: chessboard shot]

While recording, move the camera from side to side and up and down, always keeping the entire pattern in frame. The goal is to capture the pattern from different points of view. The resulting video should look like this.

Now you can run the calibration script as follows

python modules/odm_slam/src/calibrate_video.py --visual PATH_TO_CHESSBOARD_VIDEO.mp4

You will see a window displaying the video and the detected corners. When it finishes, it will print the computed calibration parameters. They should look like this (with different values)

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 1512.91332401
Camera.fy: 1512.04223185
Camera.cx: 956.585155225
Camera.cy: 527.321715394

Camera.k1: 0.140581949184
Camera.k2: -0.292250537695
Camera.p1: 0.000188785464717
Camera.p2: 0.000611510377372
Camera.k3: 0.181424769625

Keep this text. We will use it in the next section.
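The values follow the usual OpenCV convention: fx, fy, cx and cy form the camera matrix, and k1, k2, p1, p2, k3 are the distortion coefficients. As an optional, purely illustrative sanity check (not part of the ODM pipeline), you can plug them into OpenCV and undistort a frame of your calibration video; straight chessboard edges should come out straight.

# Optional sanity check, assuming OpenCV (cv2) and NumPy are installed.
# Replace the numbers with the values printed for your own camera.
import cv2
import numpy as np

K = np.array([[1512.91332401, 0.0, 956.585155225],
              [0.0, 1512.04223185, 527.321715394],
              [0.0, 0.0, 1.0]])
dist = np.array([0.140581949184, -0.292250537695,
                 0.000188785464717, 0.000611510377372, 0.181424769625])

cap = cv2.VideoCapture('PATH_TO_CHESSBOARD_VIDEO.mp4')
ok, frame = cap.read()
cap.release()
if ok:
    # Straight lines on the chessboard should look straight after undistortion
    cv2.imwrite('undistorted_frame.jpg', cv2.undistort(frame, K, dist))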

Running OpenDroneMap from a video

We are now ready to run the OpenDroneMap pipeline from a video. For this we need the video and a config file for ORB_SLAM2. Here's an example config.yaml. Before using it, paste in the calibration parameters for your camera that you just computed in the previous section.
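To see where those values go: an ORB_SLAM2 settings file is an OpenCV YAML file that begins with a %YAML:1.0 header and lists the camera parameters under the same Camera.* keys that the calibration script prints. The partial sketch below is only illustrative; start from the example config above, and treat the fps value as a placeholder for your own video's frame rate.

%YAML:1.0

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 1512.91332401
Camera.fy: 1512.04223185
# ... Camera.cx, Camera.cy, Camera.k1, Camera.k2, Camera.p1, Camera.p2, Camera.k3 as printed above ...

# Frames per second of your video (placeholder value)
Camera.fps: 30.0

# ... remaining ORB_SLAM2 settings (ORB extractor, viewer) as in the example config ...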

Put the video and the config.yaml file in an empty folder. Then run OpenDroneMap using the following command

python run.py --project-path PROJECT_PATH --video VIDEO.mp4 --slam-config config.yaml --resize-to VIDEO_WIDTH PROJECT_NAME

where PROJECT_PATH is the path to the folder containing PROJECT_NAME, PROJECT_NAME is the name of your project folder that contains the video and the config file, VIDEO.mp4 is the name of your video, and VIDEO_WIDTH is the width of the video (for example, 1920 for an HD video). For now, you also need to set use_pmvs to true in the config file.
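For example, if your projects folder is /datasets, your project folder is named castle and contains castle.mp4 and config.yaml, and the video is 1920 pixels wide, the call would look like this (all names here are placeholders):

python run.py --project-path /datasets --video castle.mp4 --slam-config config.yaml --resize-to 1920 castle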

That command will run the pipeline, starting with SLAM and continuing with stereo matching, mesh reconstruction, and texturing.

When done, the textured model will be in PROJECT_PATH/PROJECT_NAME/odm_texturing/odm_textured_model.obj. The point cloud created by the stereo matching algorithm will be in PROJECT_PATH/PROJECT_NAME/pmvs/recon0/models/option-0000.ply.

For this video, you should get something similar to the result below.

[Image: castle result]