Mirror of https://github.com/OpenDroneMap/ODM
Merge pull request #1210 from pierotofy/230
Bug fixes, speed improvements and license change to AGPL
commit 050a7ff8cc

@@ -19,7 +19,7 @@ RUN rm -rf \
     /code/SuperBuild/build/opencv \
     /code/SuperBuild/download \
     /code/SuperBuild/src/ceres \
-    /code/SuperBuild/src/entwine \
+    /code/SuperBuild/src/untwine \
     /code/SuperBuild/src/gflags \
     /code/SuperBuild/src/hexer \
     /code/SuperBuild/src/lastools \

LICENSE (153 changed lines)
@@ -1,23 +1,21 @@
-                    GNU GENERAL PUBLIC LICENSE
-                       Version 3, 29 June 2007
+                    GNU AFFERO GENERAL PUBLIC LICENSE
+                       Version 3, 19 November 2007

- Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

-  The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
+  The GNU Affero General Public License is a free, copyleft license for
+software and other kinds of works, specifically designed to ensure
+cooperation with the community in the case of network server software.

   The licenses for most software and other practical works are designed
 to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
+our General Public Licenses are intended to guarantee your freedom to
 share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
+software for all its users.

   When we speak of free software, we are referring to freedom, not
 price. Our General Public Licenses are designed to make sure that you
@@ -26,44 +24,34 @@ them if you wish), that you receive source code or can get it if you
 want it, that you can change the software or use pieces of it in new
 free programs, and that you know you can do these things.

-  To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
+  Developers that use our General Public Licenses protect your rights
+with two steps: (1) assert copyright on the software, and (2) offer
+you this License which gives you legal permission to copy, distribute
+and/or modify the software.

-  For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
+  A secondary benefit of defending all users' freedom is that
+improvements made in alternate versions of the program, if they
+receive widespread use, become available for other developers to
+incorporate. Many developers of free software are heartened and
+encouraged by the resulting cooperation. However, in the case of
+software used on network servers, this result may fail to come about.
+The GNU General Public License permits making a modified version and
+letting the public access it on a server without ever releasing its
+source code to the public.

-  Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
+  The GNU Affero General Public License is designed specifically to
+ensure that, in such cases, the modified source code becomes available
+to the community. It requires the operator of a network server to
+provide the source code of the modified version running there to the
+users of that server. Therefore, public use of a modified version, on
+a publicly accessible server, gives the public access to the source
+code of the modified version.

-  For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
+  An older license, called the Affero General Public License and
+published by Affero, was designed to accomplish similar goals. This is
+a different license, not a version of the Affero GPL, but Affero has
+released a new version of the Affero GPL which permits relicensing under
+this license.

-  Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
-  Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.

   The precise terms and conditions for copying, distribution and
 modification follow.
@@ -72,7 +60,7 @@ modification follow.

   0. Definitions.

-  "This License" refers to version 3 of the GNU General Public License.
+  "This License" refers to version 3 of the GNU Affero General Public License.

   "Copyright" also means copyright-like laws that apply to other kinds of
 works, such as semiconductor masks.
@@ -549,35 +537,45 @@ to collect a royalty for further conveying from those to whom you convey
 the Program, the only way you could satisfy both those terms and this
 License would be to refrain entirely from conveying the Program.

-  13. Use with the GNU Affero General Public License.
+  13. Remote Network Interaction; Use with the GNU General Public License.

+  Notwithstanding any other provision of this License, if you modify the
+Program, your modified version must prominently offer all users
+interacting with it remotely through a computer network (if your version
+supports such interaction) an opportunity to receive the Corresponding
+Source of your version by providing access to the Corresponding Source
+from a network server at no charge, through some standard or customary
+means of facilitating copying of software. This Corresponding Source
+shall include the Corresponding Source for any work covered by version 3
+of the GNU General Public License that is incorporated pursuant to the
+following paragraph.
+
   Notwithstanding any other provision of this License, you have
 permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
+under version 3 of the GNU General Public License into a single
 combined work, and to convey the resulting work. The terms of this
 License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
+but the work with which it is combined will remain governed by version
+3 of the GNU General Public License.

   14. Revised Versions of this License.

   The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
+the GNU Affero General Public License from time to time. Such new versions
+will be similar in spirit to the present version, but may differ in detail to
 address new problems or concerns.

   Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
+Program specifies that a certain numbered version of the GNU Affero General
 Public License "or any later version" applies to it, you have the
 option of following the terms and conditions either of that numbered
 version or of any later version published by the Free Software
 Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
+GNU Affero General Public License, you may choose any version ever published
 by the Free Software Foundation.

   If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
+versions of the GNU Affero General Public License can be used, that proxy's
 public statement of acceptance of a version permanently authorizes you
 to choose that version for the Program.
@@ -631,44 +629,33 @@ to attach them to the start of each source file to most effectively
 state the exclusion of warranty; and each file should have at least
 the "copyright" line and a pointer to where the full notice is found.

-    {one line to give the program's name and a brief idea of what it does.}
-    Copyright (C) {year} {name of author}
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year> <name of author>

     This program is free software: you can redistribute it and/or modify
-    it under the terms of the GNU General Public License as published by
-    the Free Software Foundation, either version 3 of the License, or
+    it under the terms of the GNU Affero General Public License as published
+    by the Free Software Foundation, either version 3 of the License, or
     (at your option) any later version.

     This program is distributed in the hope that it will be useful,
     but WITHOUT ANY WARRANTY; without even the implied warranty of
     MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-    GNU General Public License for more details.
+    GNU Affero General Public License for more details.

-    You should have received a copy of the GNU General Public License
-    along with this program. If not, see <http://www.gnu.org/licenses/>.
+    You should have received a copy of the GNU Affero General Public License
+    along with this program. If not, see <https://www.gnu.org/licenses/>.

 Also add information on how to contact you by electronic and paper mail.

-  If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
-    {project} Copyright (C) {year} {fullname}
-    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
-    This is free software, and you are welcome to redistribute it
-    under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
+  If your software can interact with users remotely through a computer
+network, you should also make sure that it provides a way for users to
+get its source. For example, if your program is a web application, its
+interface could display a "Source" link that leads users to an archive
+of the code. There are many ways you could offer source, and different
+solutions will be better for different programs; see section 13 for the
+specific requirements.

 You should also get your employer (if you work as a programmer) or school,
 if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<http://www.gnu.org/licenses/>.
-
-The GNU General Public License does not permit incorporating your program
-into proprietary programs. If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License. But first, please read
-<http://www.gnu.org/philosophy/why-not-lgpl.html>.
+For more information on this, and how to apply and follow the GNU AGPL, see
+<https://www.gnu.org/licenses/>.
@@ -108,7 +108,7 @@ set(custom_libs OpenSfM
                  LASzip
                  Zstd
                  PDAL
-                 Entwine
+                 Untwine
                  MvsTexturing
                  OpenMVS
 )
@@ -20,7 +20,7 @@ ExternalProject_Add(${_proj_name}
   #--Download step--------------
   DOWNLOAD_DIR    ${SB_DOWNLOAD_DIR}
   GIT_REPOSITORY  https://github.com/OpenDroneMap/openMVS
-  GIT_TAG         210
+  GIT_TAG         230
   #--Update/Patch step----------
   UPDATE_COMMAND  ""
   #--Configure step-------------
@@ -9,7 +9,7 @@ ExternalProject_Add(${_proj_name}
   #--Download step--------------
   DOWNLOAD_DIR    ${SB_DOWNLOAD_DIR}
   GIT_REPOSITORY  https://github.com/OpenDroneMap/OpenSfM/
-  GIT_TAG         221
+  GIT_TAG         230
   #--Update/Patch step----------
   UPDATE_COMMAND  git submodule update --init --recursive
   #--Configure step-------------
@@ -1,4 +1,4 @@
-set(_proj_name entwine)
+set(_proj_name untwine)
 set(_SB_BINARY_DIR "${SB_BINARY_DIR}/${_proj_name}")

 ExternalProject_Add(${_proj_name}
@@ -8,16 +8,14 @@ ExternalProject_Add(${_proj_name}
   STAMP_DIR       ${_SB_BINARY_DIR}/stamp
   #--Download step--------------
   DOWNLOAD_DIR    ${SB_DOWNLOAD_DIR}
-  GIT_REPOSITORY  https://github.com/connormanning/entwine/
-  GIT_TAG         2.1.0
+  GIT_REPOSITORY  https://github.com/pierotofy/untwine/
+  GIT_TAG         insttgt
   #--Update/Patch step----------
   UPDATE_COMMAND  ""
   #--Configure step-------------
   SOURCE_DIR      ${SB_SOURCE_DIR}/${_proj_name}
   CMAKE_ARGS
-    -DCMAKE_CXX_FLAGS=-isystem\ ${SB_SOURCE_DIR}/pdal
-    -DADDITIONAL_LINK_DIRECTORIES_PATHS=${SB_INSTALL_DIR}/lib
-    -DWITH_TESTS=OFF
+    -DPDAL_DIR=${SB_INSTALL_DIR}/lib/cmake/PDAL
     -DCMAKE_BUILD_TYPE=Release
     -DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
   #--Build step-----------------

VERSION (2 changed lines)

@@ -1 +1 @@
-2.2.1
+2.3.0
@@ -1,42 +0,0 @@
-project(odm_slam)
-cmake_minimum_required(VERSION 2.8)
-
-# Set opencv dir to the input spedified with option -DOPENCV_DIR="path"
-set(OPENCV_DIR "OPENCV_DIR-NOTFOUND" CACHE "OPENCV_DIR" "Path to the opencv installation directory")
-
-# Add compiler options.
-add_definitions(-Wall -Wextra)
-
-# Find pcl at the location specified by PCL_DIR
-find_package(VTK 6.0 REQUIRED)
-find_package(PCL 1.8 HINTS "${PCL_DIR}/share/pcl-1.8" REQUIRED)
-
-# Find OpenCV at the default location
-find_package(OpenCV HINTS "${OPENCV_DIR}" REQUIRED)
-
-# Only link with required opencv modules.
-set(OpenCV_LIBS opencv_core opencv_imgproc opencv_highgui)
-
-# Add the Eigen and OpenCV include dirs.
-# Necessary since the PCL_INCLUDE_DIR variable set by find_package is broken.)
-include_directories(${EIGEN_ROOT})
-include_directories(${OpenCV_INCLUDE_DIRS})
-
-set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC -std=c++11")
-
-set(PANGOLIN_ROOT ${CMAKE_BINARY_DIR}/../SuperBuild/install)
-
-set(ORB_SLAM_ROOT ${CMAKE_BINARY_DIR}/../SuperBuild/src/orb_slam2)
-
-include_directories(${EIGEN_ROOT})
-include_directories(${ORB_SLAM_ROOT})
-include_directories(${ORB_SLAM_ROOT}/include)
-link_directories(${PANGOLIN_ROOT}/lib)
-link_directories(${ORB_SLAM_ROOT}/lib)
-
-# Add source directory
-aux_source_directory("./src" SRC_LIST)
-
-# Add exectuteable
-add_executable(${PROJECT_NAME} ${SRC_LIST})
-target_link_libraries(odm_slam ${OpenCV_LIBS} ORB_SLAM2 pangolin)
@@ -1,98 +0,0 @@
-#include <iostream>
-
-#include <opencv2/opencv.hpp>
-
-#include <System.h>
-#include <Converter.h>
-
-
-void SaveKeyFrameTrajectory(ORB_SLAM2::Map *map, const string &filename, const string &tracksfile) {
-  std::cout << std::endl << "Saving keyframe trajectory to " << filename << " ..." << std::endl;
-
-  vector<ORB_SLAM2::KeyFrame*> vpKFs = map->GetAllKeyFrames();
-  sort(vpKFs.begin(), vpKFs.end(), ORB_SLAM2::KeyFrame::lId);
-
-  std::ofstream f;
-  f.open(filename.c_str());
-  f << fixed;
-
-  std::ofstream fpoints;
-  fpoints.open(tracksfile.c_str());
-  fpoints << fixed;
-
-  for(size_t i = 0; i < vpKFs.size(); i++) {
-    ORB_SLAM2::KeyFrame* pKF = vpKFs[i];
-
-    if(pKF->isBad())
-      continue;
-
-    cv::Mat R = pKF->GetRotation().t();
-    vector<float> q = ORB_SLAM2::Converter::toQuaternion(R);
-    cv::Mat t = pKF->GetCameraCenter();
-    f << setprecision(6) << pKF->mTimeStamp << setprecision(7) << " " << t.at<float>(0) << " " << t.at<float>(1) << " " << t.at<float>(2)
-      << " " << q[0] << " " << q[1] << " " << q[2] << " " << q[3] << std::endl;
-
-    for (auto point : pKF->GetMapPoints()) {
-      auto coords = point->GetWorldPos();
-      fpoints << setprecision(6)
-              << pKF->mTimeStamp
-              << " " << point->mnId
-              << setprecision(7)
-              << " " << coords.at<float>(0, 0)
-              << " " << coords.at<float>(1, 0)
-              << " " << coords.at<float>(2, 0)
-              << std::endl;
-    }
-  }
-
-  f.close();
-  fpoints.close();
-  std::cout << std::endl << "trajectory saved!" << std::endl;
-}
-
-
-int main(int argc, char **argv) {
-  if(argc != 4) {
-    std::cerr << std::endl <<
-      "Usage: " << argv[0] << " vocabulary settings video" <<
-      std::endl;
-    return 1;
-  }
-
-  cv::VideoCapture cap(argv[3]);
-  if(!cap.isOpened()) {
-    std::cerr << "Failed to load video: " << argv[3] << std::endl;
-    return -1;
-  }
-
-  ORB_SLAM2::System SLAM(argv[1], argv[2], ORB_SLAM2::System::MONOCULAR, true);
-
-  usleep(10 * 1e6);
-
-  std::cout << "Start processing video ..." << std::endl;
-
-  double T = 0.1; // Seconds between frames
-  cv::Mat im;
-  int num_frames = cap.get(CV_CAP_PROP_FRAME_COUNT);
-  for(int ni = 0;; ++ni){
-    std::cout << "processing frame " << ni << "/" << num_frames << std::endl;
-    // Get frame
-    bool res = false;
-    for (int trial = 0; !res && trial < 20; ++trial) {
-      std::cout << "trial " << trial << std::endl;
-      res = cap.read(im);
-    }
-    if(!res) break;
-
-    double timestamp = ni * T;
-
-    SLAM.TrackMonocular(im, timestamp);
-
-    //usleep(int(T * 1e6));
-  }
-
-  SLAM.Shutdown();
-  SaveKeyFrameTrajectory(SLAM.GetMap(), "KeyFrameTrajectory.txt", "MapPoints.txt");
-
-  return 0;
-}
@@ -1,152 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import sys
-
-import numpy as np
-import cv2
-
-
-class Calibrator:
-    """Camera calibration using a chessboard pattern."""
-
-    def __init__(self, pattern_width, pattern_height, motion_threshold=0.05):
-        """Init the calibrator.
-
-        The parameter motion_threshold determines the minimal motion required
-        to add a new frame to the calibration data, as a ratio of image width.
-        """
-        self.pattern_size = (pattern_width, pattern_height)
-        self.motion_threshold = motion_threshold
-        self.pattern_points = np.array([
-            (i, j, 0.0)
-            for j in range(pattern_height)
-            for i in range(pattern_width)
-        ], dtype=np.float32)
-        self.object_points = []
-        self.image_points = []
-
-    def process_image(self, image, window_name):
-        """Find corners of an image and store them internally."""
-        if len(image.shape) == 3:
-            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
-        else:
-            gray = image
-
-        h, w = gray.shape
-        self.image_size = (w, h)
-
-        found, corners = cv2.findChessboardCorners(gray, self.pattern_size)
-
-        if found:
-            term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.1)
-            cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), term)
-            self._add_points(corners.reshape(-1, 2))
-
-        if window_name:
-            cv2.drawChessboardCorners(image, self.pattern_size, corners, found)
-            cv2.imshow(window_name, image)
-
-        return found
-
-    def calibrate(self):
-        """Run calibration using points extracted by process_image."""
-        rms, camera_matrix, dist_coefs, rvecs, tvecs = cv2.calibrateCamera(
-            self.object_points, self.image_points, self.image_size, None, None)
-        return rms, camera_matrix, dist_coefs.ravel()
-
-    def _add_points(self, image_points):
-        if self.image_points:
-            delta = np.fabs(image_points - self.image_points[-1]).max()
-            should_add = (delta > self.image_size[0] * self.motion_threshold)
-        else:
-            should_add = True
-
-        if should_add:
-            self.image_points.append(image_points)
-            self.object_points.append(self.pattern_points)
-
-
-def video_frames(filename):
-    """Yield frames in a video."""
-    cap = cv2.VideoCapture(args.video)
-    while True:
-        ret, frame = cap.read()
-        if ret:
-            yield frame
-        else:
-            break
-    cap.release()
-
-
-def orb_slam_calibration_config(camera_matrix, dist_coefs):
-    """String with calibration parameters in orb_slam config format."""
-    lines = [
-        "# Camera calibration and distortion parameters (OpenCV)",
-        "Camera.fx: {}".format(camera_matrix[0, 0]),
-        "Camera.fy: {}".format(camera_matrix[1, 1]),
-        "Camera.cx: {}".format(camera_matrix[0, 2]),
-        "Camera.cy: {}".format(camera_matrix[1, 2]),
-        "",
-        "Camera.k1: {}".format(dist_coefs[0]),
-        "Camera.k2: {}".format(dist_coefs[1]),
-        "Camera.p1: {}".format(dist_coefs[2]),
-        "Camera.p2: {}".format(dist_coefs[3]),
-        "Camera.k3: {}".format(dist_coefs[4]),
-    ]
-    return "\n".join(lines)
-
-
-def parse_arguments():
-    parser = argparse.ArgumentParser(
-        description="Camera calibration from video of a chessboard.")
-    parser.add_argument(
-        'video',
-        help="video of the checkerboard")
-    parser.add_argument(
-        '--output',
-        default='calibration',
-        help="base name for the output files")
-    parser.add_argument(
-        '--size',
-        default='8x6',
-        help="size of the chessboard")
-    parser.add_argument(
-        '--visual',
-        action='store_true',
-        help="display images while calibrating")
-    return parser.parse_args()
-
-
-if __name__ == '__main__':
-    args = parse_arguments()
-
-    pattern_size = [int(i) for i in args.size.split('x')]
-    calibrator = Calibrator(pattern_size[0], pattern_size[1])
-
-    window_name = None
-    if args.visual:
-        window_name = 'Chessboard detection'
-        cv2.namedWindow(window_name, cv2.WINDOW_NORMAL)
-
-    print "kept\tcurrent\tchessboard found"
-
-    for i, frame in enumerate(video_frames(args.video)):
-        found = calibrator.process_image(frame, window_name)
-
-        print "{}\t{}\t{} \r".format(
-            len(calibrator.image_points), i, found),
-        sys.stdout.flush()
-
-        if args.visual:
-            if cv2.waitKey(1) & 0xFF == ord('q'):
-                break
-
-    cv2.destroyAllWindows()
-
-    rms, camera_matrix, dist_coefs = calibrator.calibrate()
-
-    print
-    print "RMS:", rms
-    print
-    print orb_slam_calibration_config(camera_matrix, dist_coefs)
@@ -1,196 +0,0 @@
-import argparse
-import json
-import os
-import yaml
-
-import cv2
-import numpy as np
-from opensfm import transformations as tf
-from opensfm.io import mkdir_p
-
-
-SCALE = 50
-
-
-def parse_orb_slam2_config_file(filename):
-    '''
-    Parse ORB_SLAM2 config file.
-
-    Parsing manually since neither pyyaml nor cv2.FileStorage seem to work.
-    '''
-    res = {}
-    with open(filename) as fin:
-        lines = fin.readlines()
-
-    for line in lines:
-        line = line.strip()
-        if line and line[0] != '#' and ':' in line:
-            key, value = line.split(':')
-            res[key.strip()] = value.strip()
-    return res
-
-
-def camera_from_config(video_filename, config_filename):
-    '''
-    Creates an OpenSfM from an ORB_SLAM2 config
-    '''
-    config = parse_orb_slam2_config_file(config_filename)
-    fx = float(config['Camera.fx'])
-    fy = float(config['Camera.fy'])
-    cx = float(config['Camera.cx'])
-    cy = float(config['Camera.cy'])
-    k1 = float(config['Camera.k1'])
-    k2 = float(config['Camera.k2'])
-    p1 = float(config['Camera.p1'])
-    p2 = float(config['Camera.p2'])
-    width, height = get_video_size(video_filename)
-    size = max(width, height)
-    return {
-        'width': width,
-        'height': height,
-        'focal': np.sqrt(fx * fy) / size,
-        'k1': k1,
-        'k2': k2
-    }
-
-
-def shot_id_from_timestamp(timestamp):
-    T = 0.1  # TODO(pau) get this from config
-    i = int(round(timestamp / T))
-    return 'frame{0:06d}.png'.format(i)
-
-
-def shots_from_trajectory(trajectory_filename):
-    '''
-    Create opensfm shots from an orb_slam2/TUM trajectory
-    '''
-    shots = {}
-    with open(trajectory_filename) as fin:
-        lines = fin.readlines()
-
-    for line in lines:
-        a = map(float, line.split())
-        timestamp = a[0]
-        c = np.array(a[1:4])
-        q = np.array(a[4:8])
-        R = tf.quaternion_matrix([q[3], q[0], q[1], q[2]])[:3, :3].T
-        t = -R.dot(c) * SCALE
-        shot = {
-            'camera': 'slamcam',
-            'rotation': list(cv2.Rodrigues(R)[0].flat),
-            'translation': list(t.flat),
-            'created_at': timestamp,
-        }
-        shots[shot_id_from_timestamp(timestamp)] = shot
-    return shots
-
-
-def points_from_map_points(filename):
-    points = {}
-    with open(filename) as fin:
-        lines = fin.readlines()
-
-    for line in lines:
-        words = line.split()
-        point_id = words[1]
-        coords = map(float, words[2:5])
-        coords = [SCALE * i for i in coords]
-        points[point_id] = {
-            'coordinates': coords,
-            'color': [100, 0, 200]
-        }
-
-    return points
-
-
-def tracks_from_map_points(filename):
-    tracks = []
-    with open(filename) as fin:
-        lines = fin.readlines()
-
-    for line in lines:
-        words = line.split()
-        timestamp = float(words[0])
-        shot_id = shot_id_from_timestamp(timestamp)
-        point_id = words[1]
-        row = [shot_id, point_id, point_id, '0', '0', '0', '0', '0']
-        tracks.append('\t'.join(row))
-
-    return '\n'.join(tracks)
-
-
-def get_video_size(video):
-    cap = cv2.VideoCapture(video)
-    width = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
-    height = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
-    cap.release()
-    return width, height
-
-
-def extract_keyframes_from_video(video, reconstruction):
-    '''
-    Reads video and extracts a frame for each shot in reconstruction
-    '''
-    image_path = 'images'
-    mkdir_p(image_path)
-    T = 0.1  # TODO(pau) get this from config
-    cap = cv2.VideoCapture(video)
-    video_idx = 0
-
-    shot_ids = sorted(reconstruction['shots'].keys())
-    for shot_id in shot_ids:
-        shot = reconstruction['shots'][shot_id]
-        timestamp = shot['created_at']
-        keyframe_idx = int(round(timestamp / T))
-
-        while video_idx <= keyframe_idx:
-            for i in range(20):
-                ret, frame = cap.read()
-                if ret:
-                    break
-                else:
-                    print 'retrying'
-            if not ret:
-                raise RuntimeError(
-                    'Cound not find keyframe {} in video'.format(shot_id))
-            if video_idx == keyframe_idx:
-                cv2.imwrite(os.path.join(image_path, shot_id), frame)
-            video_idx += 1
-
-    cap.release()
-
-
-if __name__ == '__main__':
-    parser = argparse.ArgumentParser(
-        description='Convert ORB_SLAM2 output to OpenSfM')
-    parser.add_argument(
-        'video',
-        help='the tracked video file')
-    parser.add_argument(
-        'trajectory',
-        help='the trajectory file')
-    parser.add_argument(
-        'points',
-        help='the map points file')
-    parser.add_argument(
-        'config',
-        help='config file with camera calibration')
-    args = parser.parse_args()
-
-    r = {
-        'cameras': {},
-        'shots': {},
-        'points': {},
-    }
-
-    r['cameras']['slamcam'] = camera_from_config(args.video, args.config)
-    r['shots'] = shots_from_trajectory(args.trajectory)
-    r['points'] = points_from_map_points(args.points)
-    tracks = tracks_from_map_points(args.points)
-
-    with open('reconstruction.json', 'w') as fout:
-        json.dump([r], fout, indent=4)
-    with open('tracks.csv', 'w') as fout:
-        fout.write(tracks)
-
-    extract_keyframes_from_video(args.video, r)
@@ -1,53 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import os
-
-import cv2
-import numpy as np
-
-import opensfm.dataset as dataset
-import opensfm.io as io
-
-
-def opencv_calibration_matrix(width, height, focal):
-    '''Calibration matrix as used by OpenCV and PMVS
-    '''
-    f = focal * max(width, height)
-    return np.matrix([[f, 0, 0.5 * (width - 1)],
-                      [0, f, 0.5 * (height - 1)],
-                      [0, 0, 1.0]])
-
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(description='Undistort images')
-    parser.add_argument('dataset', help='path to the dataset to be processed')
-    parser.add_argument('--output', help='output folder for the undistorted images')
-    args = parser.parse_args()
-
-    data = dataset.DataSet(args.dataset)
-    if args.output:
-        output_path = args.output
-    else:
-        output_path = os.path.join(data.data_path, 'undistorted')
-
-    print "Undistorting images from dataset [%s] to dir [%s]" % (data.data_path, output_path)
-
-    io.mkdir_p(output_path)
-
-    reconstructions = data.load_reconstruction()
-    for h, reconstruction in enumerate(reconstructions):
-        print "undistorting reconstruction", h
-        for image in reconstruction['shots']:
-            print "undistorting image", image
-            shot = reconstruction["shots"][image]
-
-            original_image = data.image_as_array(image)[:,:,::-1]
-            camera = reconstruction['cameras'][shot['camera']]
-            original_h, original_w = original_image.shape[:2]
-            K = opencv_calibration_matrix(original_w, original_h, camera['focal'])
-            k1 = camera["k1"]
-            k2 = camera["k2"]
-            undistorted_image = cv2.undistort(original_image, K, np.array([k1, k2, 0, 0]))
-
-            new_image_path = os.path.join(output_path, image.split('/')[-1])
-            cv2.imwrite(new_image_path, undistorted_image)
@@ -24,7 +24,7 @@ def get_max_memory_mb(minimum = 100, use_at_most = 0.5):
     """
     return max(minimum, (virtual_memory().available / 1024 / 1024) * use_at_most)

-def parallel_map(func, items, max_workers=1):
+def parallel_map(func, items, max_workers=1, single_thread_fallback=True):
     """
     Our own implementation for parallel processing
     which handles gracefully CTRL+C and reverts to
@@ -85,7 +85,7 @@ def parallel_map(func, items, max_workers=1):

     stop_workers()

-    if error is not None:
+    if error is not None and single_thread_fallback:
         # Try to reprocess using a single thread
         # in case this was a memory error
         log.ODM_WARNING("Failed to run process in parallel, retrying with a single thread...")
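Note on the parallel_map change above: the new single_thread_fallback flag lets callers opt out of the sequential retry that normally follows a failed parallel run. A minimal, self-contained sketch of the pattern (not ODM's actual worker implementation, which manages its own queue and handles CTRL+C):

# Sketch of the fallback pattern behind parallel_map's new flag.
# A thread pool is used purely to illustrate the control flow.
from multiprocessing.dummy import Pool

def parallel_map_sketch(func, items, max_workers=1, single_thread_fallback=True):
    error = None
    try:
        with Pool(max_workers) as pool:
            pool.map(func, items)
    except Exception as e:
        error = e

    if error is not None and single_thread_fallback:
        # Retry one item at a time, in case the parallel run
        # failed due to memory pressure
        for item in items:
            func(item)
    elif error is not None:
        raise error

The band-alignment code added later in this PR calls parallel_map(..., single_thread_fallback=False), since a homography that fails on one image pair is not worth a full single-threaded retry.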
@@ -156,6 +156,15 @@ def config(argv=None, parser=None):
                         help=('Set feature extraction quality. Higher quality generates better features, but requires more memory and takes longer. '
                                 'Can be one of: %(choices)s. Default: '
                                 '%(default)s'))

+    parser.add_argument('--matcher-type',
+                        metavar='<string>',
+                        action=StoreValue,
+                        default='flann',
+                        choices=['flann', 'bow'],
+                        help=('Matcher algorithm, Fast Library for Approximate Nearest Neighbors or Bag of Words. FLANN is slower, but more stable. BOW is faster, but can sometimes miss valid matches. '
+                                'Can be one of: %(choices)s. Default: '
+                                '%(default)s'))
+
     parser.add_argument('--matcher-neighbors',
                         metavar='<integer>',
@@ -464,13 +473,6 @@ def config(argv=None, parser=None):
                                 '[none, gauss_damping, gauss_clamping]. Default: '
                                 '%(default)s'))

-    parser.add_argument('--texturing-skip-visibility-test',
-                        action=StoreTrue,
-                        nargs=0,
-                        default=False,
-                        help=('Skip geometric visibility test. Default: '
-                                ' %(default)s'))
-
     parser.add_argument('--texturing-skip-global-seam-leveling',
                         action=StoreTrue,
                         nargs=0,
@@ -484,20 +486,6 @@ def config(argv=None, parser=None):
                         default=False,
                         help='Skip local seam blending. Default: %(default)s')

-    parser.add_argument('--texturing-skip-hole-filling',
-                        action=StoreTrue,
-                        nargs=0,
-                        default=False,
-                        help=('Skip filling of holes in the mesh. Default: '
-                                ' %(default)s'))
-
-    parser.add_argument('--texturing-keep-unseen-faces',
-                        action=StoreTrue,
-                        nargs=0,
-                        default=False,
-                        help=('Keep faces in the mesh that are not seen in any camera. '
-                                'Default: %(default)s'))
-
     parser.add_argument('--texturing-tone-mapping',
                         metavar='<string>',
                         action=StoreValue,
@@ -755,6 +743,15 @@ def config(argv=None, parser=None):
                                 'points will be re-classified and gaps will be filled. Useful for generating DTMs. '
                                 'Default: %(default)s'))

+    parser.add_argument('--primary-band',
+                        metavar='<string>',
+                        action=StoreValue,
+                        default="auto",
+                        type=str,
+                        help=('When processing multispectral datasets, you can specify the name of the primary band that will be used for reconstruction. '
+                                'It\'s recommended to choose a band which has sharp details and is in focus. '
+                                'Default: %(default)s'))
+
     args = parser.parse_args(argv)

     # check that the project path setting has been set properly
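A small, self-contained sketch of how the new --primary-band value drives band selection (plain argparse instead of ODM's StoreValue action; the helper mirrors the get_primary_band_name logic added in opendm/multispectral further down):

# Illustrative only: plain argparse stand-in for ODM's option handling.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--primary-band', default='auto', type=str)
args = parser.parse_args(['--primary-band', 'NIR'])

# multi_camera mirrors ODM's structure: a list of band groups sorted by
# band index, each {'name': band name, 'photos': [photo objects]}.
multi_camera = [{'name': 'RGB', 'photos': []}, {'name': 'NIR', 'photos': []}]

def get_primary_band_name(multi_camera, user_band_name):
    # "auto" picks the first (lowest-index) band; otherwise match by name,
    # falling back to the first band -- same logic as the PR's helper.
    if user_band_name == 'auto':
        return multi_camera[0]['name']
    for band in multi_camera:
        if band['name'].lower() == user_band_name.lower():
            return band['name']
    return multi_camera[0]['name']

print(get_primary_band_name(multi_camera, args.primary_band))  # NIR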
@@ -214,12 +214,14 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
         # so we need to convert to GeoTIFF first.
         run('gdal_translate '
                 '-co NUM_THREADS={threads} '
+                '-co BIGTIFF=IF_SAFER '
                 '--config GDAL_CACHEMAX {max_memory}% '
                 '{tiles_vrt} {geotiff_tmp}'.format(**kwargs))

         # Scale to 10% size
         run('gdal_translate '
                 '-co NUM_THREADS={threads} '
+                '-co BIGTIFF=IF_SAFER '
                 '--config GDAL_CACHEMAX {max_memory}% '
                 '-outsize 10% 0 '
                 '{geotiff_tmp} {geotiff_small}'.format(**kwargs))
@@ -227,6 +229,7 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
         # Fill scaled
         run('gdal_fillnodata.py '
                 '-co NUM_THREADS={threads} '
+                '-co BIGTIFF=IF_SAFER '
                 '--config GDAL_CACHEMAX {max_memory}% '
                 '-b 1 '
                 '-of GTiff '
@@ -237,6 +240,7 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
         run('gdal_translate '
                 '-co NUM_THREADS={threads} '
                 '-co TILED=YES '
+                '-co BIGTIFF=IF_SAFER '
                 '-co COMPRESS=DEFLATE '
                 '--config GDAL_CACHEMAX {max_memory}% '
                 '{merged_vrt} {geotiff}'.format(**kwargs))
@@ -244,6 +248,7 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
         run('gdal_translate '
                 '-co NUM_THREADS={threads} '
                 '-co TILED=YES '
+                '-co BIGTIFF=IF_SAFER '
                 '-co COMPRESS=DEFLATE '
                 '--config GDAL_CACHEMAX {max_memory}% '
                 '{tiles_vrt} {geotiff}'.format(**kwargs))
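The -co BIGTIFF=IF_SAFER creation option tells GDAL to emit a BigTIFF container whenever the output could exceed the 4 GB classic-TIFF limit, which large DEMs routinely do. An illustrative expansion of one of the calls above (paths and values invented for the example):

# Illustrative expansion of one of the gdal_translate calls above;
# paths and values are made up for the example.
kwargs = {
    'threads': 4,
    'max_memory': 25,              # percent of available memory for GDAL cache
    'tiles_vrt': 'tiles.vrt',
    'geotiff_tmp': 'dsm_tmp.tif',
}
cmd = ('gdal_translate '
       '-co NUM_THREADS={threads} '
       '-co BIGTIFF=IF_SAFER '     # switch to BigTIFF if output may exceed 4GB
       '--config GDAL_CACHEMAX {max_memory}% '
       '{tiles_vrt} {geotiff_tmp}'.format(**kwargs))
print(cmd)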
@@ -19,24 +19,15 @@ def build(input_point_cloud_files, output_path, max_concurrency=8, rerun=False):
         shutil.rmtree(output_path)

     kwargs = {
-        'threads': max_concurrency,
+        # 'threads': max_concurrency,
         'tmpdir': tmpdir,
-        'all_inputs': "-i " + " ".join(map(quote, input_point_cloud_files)),
+        'files': "--files " + " ".join(map(quote, input_point_cloud_files)),
         'outputdir': output_path
     }

-    # Run scan to compute dataset bounds
-    system.run('entwine scan --threads {threads} --tmp "{tmpdir}" {all_inputs} -o "{outputdir}"'.format(**kwargs))
-    scan_json = os.path.join(output_path, "scan.json")
-
-    if os.path.exists(scan_json):
-        kwargs['input'] = scan_json
-        for _ in range(num_files):
-            # One at a time
-            system.run('entwine build --threads {threads} --tmp "{tmpdir}" -i "{input}" -o "{outputdir}" --run 1'.format(**kwargs))
-    else:
-        log.ODM_WARNING("%s does not exist, no point cloud will be built." % scan_json)
+    # Run untwine
+    system.run('untwine --temp_dir "{tmpdir}" {files} --output_dir "{outputdir}"'.format(**kwargs))

+    # Cleanup
     if os.path.exists(tmpdir):
         shutil.rmtree(tmpdir)
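With this change the two-pass entwine scan/build pipeline collapses into a single untwine invocation (note the commented-out 'threads' entry: no thread count is passed here). An illustrative expansion of the command string, with an invented input file:

# Illustrative only: what the untwine command string expands to.
from shlex import quote

input_point_cloud_files = ['odm_filterpoints/point_cloud.laz']  # invented path
kwargs = {
    'tmpdir': 'entwine_pointcloud-tmp',
    'files': '--files ' + ' '.join(map(quote, input_point_cloud_files)),
    'outputdir': 'entwine_pointcloud',
}
print('untwine --temp_dir "{tmpdir}" {files} --output_dir "{outputdir}"'.format(**kwargs))
# untwine --temp_dir "entwine_pointcloud-tmp" --files odm_filterpoints/point_cloud.laz --output_dir "entwine_pointcloud"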
@@ -50,7 +50,7 @@ class GeoFile:
                                 horizontal_accuracy, vertical_accuracy,
                                 extras)
             else:
-                logger.warning("Malformed geo line: %s" % line)
+                log.ODM_WARNING("Malformed geo line: %s" % line)

     def get_entry(self, filename):
         return self.entries.get(filename)
@@ -1,7 +1,16 @@
-from opendm import dls
 import math
+import re
+import cv2
+import os
+from opendm import dls
 import numpy as np
 from opendm import log
+from opendm.concurrency import parallel_map
+from opensfm.io import imread
+
+from skimage import exposure
+from skimage.morphology import disk
+from skimage.filters import rank, gaussian

 # Loosely based on https://github.com/micasense/imageprocessing/blob/master/micasense/utils.py
@ -150,4 +159,342 @@ def compute_irradiance(photo, use_sun_sensor=True):
|
||||||
elif use_sun_sensor:
|
elif use_sun_sensor:
|
||||||
log.ODM_WARNING("No sun sensor values found for %s" % photo.filename)
|
log.ODM_WARNING("No sun sensor values found for %s" % photo.filename)
|
||||||
|
|
||||||
return 1.0
|
return 1.0
|
||||||
|
|
||||||
|
def get_photos_by_band(multi_camera, user_band_name):
|
||||||
|
band_name = get_primary_band_name(multi_camera, user_band_name)
|
||||||
|
|
||||||
|
for band in multi_camera:
|
||||||
|
if band['name'] == band_name:
|
||||||
|
return band['photos']
|
||||||
|
|
||||||
|
|
||||||
|
def get_primary_band_name(multi_camera, user_band_name):
|
||||||
|
if len(multi_camera) < 1:
|
||||||
|
raise Exception("Invalid multi_camera list")
|
||||||
|
|
||||||
|
# multi_camera is already sorted by band_index
|
||||||
|
if user_band_name == "auto":
|
||||||
|
return multi_camera[0]['name']
|
||||||
|
|
||||||
|
for band in multi_camera:
|
||||||
|
if band['name'].lower() == user_band_name.lower():
|
||||||
|
return band['name']
|
||||||
|
|
||||||
|
band_name_fallback = multi_camera[0]['name']
|
||||||
|
|
||||||
|
log.ODM_WARNING("Cannot find band name \"%s\", will use \"%s\" instead" % (user_band_name, band_name_fallback))
|
||||||
|
return band_name_fallback
|
||||||
|
|
||||||
|
|
||||||
|
def compute_band_maps(multi_camera, primary_band):
|
||||||
|
"""
|
||||||
|
Computes maps of:
|
||||||
|
- { photo filename --> associated primary band photo } (s2p)
|
||||||
|
- { primary band filename --> list of associated secondary band photos } (p2s)
|
||||||
|
by looking at capture time or filenames as a fallback
|
||||||
|
"""
|
||||||
|
band_name = get_primary_band_name(multi_camera, primary_band)
|
||||||
|
primary_band_photos = None
|
||||||
|
for band in multi_camera:
|
||||||
|
if band['name'] == band_name:
|
||||||
|
primary_band_photos = band['photos']
|
||||||
|
break
|
||||||
|
|
||||||
|
# Try using capture time as the grouping factor
|
||||||
|
try:
|
||||||
|
capture_time_map = {}
|
||||||
|
s2p = {}
|
||||||
|
p2s = {}
|
||||||
|
|
||||||
|
for p in primary_band_photos:
|
||||||
|
t = p.get_utc_time()
|
||||||
|
if t is None:
|
||||||
|
raise Exception("Cannot use capture time (no information in %s)" % p.filename)
|
||||||
|
|
||||||
|
# Should be unique across primary band
|
||||||
|
if capture_time_map.get(t) is not None:
|
||||||
|
raise Exception("Unreliable capture time detected (duplicate)")
|
||||||
|
|
||||||
|
capture_time_map[t] = p
|
||||||
|
|
||||||
|
for band in multi_camera:
|
||||||
|
photos = band['photos']
|
||||||
|
|
||||||
|
for p in photos:
|
||||||
|
t = p.get_utc_time()
|
||||||
|
if t is None:
|
||||||
|
raise Exception("Cannot use capture time (no information in %s)" % p.filename)
|
||||||
|
|
||||||
|
# Should match the primary band
|
||||||
|
if capture_time_map.get(t) is None:
|
||||||
|
raise Exception("Unreliable capture time detected (no primary band match)")
|
||||||
|
|
||||||
|
s2p[p.filename] = capture_time_map[t]
|
||||||
|
|
||||||
|
if band['name'] != band_name:
|
||||||
|
p2s.setdefault(capture_time_map[t].filename, []).append(p)
|
||||||
|
|
||||||
|
return s2p, p2s
|
||||||
|
except Exception as e:
|
||||||
|
# Fallback on filename conventions
|
||||||
|
log.ODM_WARNING("%s, will use filenames instead" % str(e))
|
||||||
|
|
||||||
|
filename_map = {}
|
||||||
|
s2p = {}
|
||||||
|
p2s = {}
|
||||||
|
file_regex = re.compile(r"^(.+)[-_]\w+(\.[A-Za-z]{3,4})$")
|
||||||
|
|
||||||
|
for p in primary_band_photos:
|
||||||
|
filename_without_band = re.sub(file_regex, "\\1\\2", p.filename)
|
||||||
|
|
||||||
|
# Quick check
|
||||||
|
if filename_without_band == p.filename:
|
||||||
|
raise Exception("Cannot match bands by filename on %s, make sure to name your files [filename]_band[.ext] uniformly." % p.filename)
|
||||||
|
|
||||||
|
filename_map[filename_without_band] = p
|
||||||
|
|
||||||
|
for band in multi_camera:
|
||||||
|
photos = band['photos']
|
||||||
|
|
||||||
|
for p in photos:
|
||||||
|
filename_without_band = re.sub(file_regex, "\\1\\2", p.filename)
|
||||||
|
|
||||||
|
# Quick check
|
||||||
|
if filename_without_band == p.filename:
|
||||||
|
raise Exception("Cannot match bands by filename on %s, make sure to name your files [filename]_band[.ext] uniformly." % p.filename)
|
||||||
|
|
||||||
|
s2p[p.filename] = filename_map[filename_without_band]
|
||||||
|
|
||||||
|
if band['name'] != band_name:
|
||||||
|
p2s.setdefault(filename_map[filename_without_band].filename, []).append(p)
|
||||||
|
|
||||||
|
return s2p, p2s
|
||||||
|
|
||||||
|
def compute_alignment_matrices(multi_camera, primary_band_name, images_path, s2p, p2s, max_concurrency=1, max_samples=30):
    log.ODM_INFO("Computing band alignment")
    alignment_info = {}

    # For each secondary band
    for band in multi_camera:
        if band['name'] != primary_band_name:
            matrices = []

            def parallel_compute_homography(p):
                try:
                    if len(matrices) >= max_samples:
                        # log.ODM_INFO("Got enough samples for %s (%s)" % (band['name'], max_samples))
                        return

                    # Find good matrix candidates for alignment
                    primary_band_photo = s2p.get(p['filename'])
                    if primary_band_photo is None:
                        log.ODM_WARNING("Cannot find primary band photo for %s" % p['filename'])
                        return

                    warp_matrix, dimension, algo = compute_homography(os.path.join(images_path, p['filename']),
                                                                      os.path.join(images_path, primary_band_photo.filename))

                    if warp_matrix is not None:
                        log.ODM_INFO("%s --> %s good match" % (p['filename'], primary_band_photo.filename))
                        matrices.append({
                            'warp_matrix': warp_matrix,
                            'eigvals': np.linalg.eigvals(warp_matrix),
                            'dimension': dimension,
                            'algo': algo
                        })
                    else:
                        log.ODM_INFO("%s --> %s cannot be matched" % (p['filename'], primary_band_photo.filename))
                except Exception as e:
                    log.ODM_WARNING("Failed to compute homography for %s: %s" % (p['filename'], str(e)))

            parallel_map(parallel_compute_homography, [{'filename': p.filename} for p in band['photos']], max_concurrency, single_thread_fallback=False)

            # Choose winning algorithm (doesn't seem to yield improvements)
            # feat_count = 0
            # ecc_count = 0
            # for m in matrices:
            #     if m['algo'] == 'feat':
            #         feat_count += 1
            #     if m['algo'] == 'ecc':
            #         ecc_count += 1
            # algo = 'feat' if feat_count >= ecc_count else 'ecc'
            # log.ODM_INFO("Feat: %s | ECC: %s | Winner: %s" % (feat_count, ecc_count, algo))
            # matrices = [m for m in matrices if m['algo'] == algo]

            # Find the matrix that has the most common eigvals
            # among all matrices. That should be the "best" alignment.
            for m1 in matrices:
                acc = np.array([0.0, 0.0, 0.0])
                e = m1['eigvals']

                for m2 in matrices:
                    acc += abs(e - m2['eigvals'])

                m1['score'] = acc.sum()

            # Sort by score, ascending (lower is better)
            matrices.sort(key=lambda x: x['score'], reverse=False)

            if len(matrices) > 0:
                alignment_info[band['name']] = matrices[0]
                log.ODM_INFO("%s band will be aligned using warp matrix %s (score: %s)" % (band['name'], matrices[0]['warp_matrix'], matrices[0]['score']))
            else:
                log.ODM_WARNING("Cannot find alignment matrix for band %s, the band will likely be misaligned!" % band['name'])

    return alignment_info
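The eigenvalue-consensus selection above picks the candidate whose eigenvalues have the smallest total absolute distance to every other candidate's. A standalone sketch with invented matrices (not ODM code) shows the idea:

import numpy as np

# Two candidates nearly agree; the third is an outlier.
candidates = [np.eye(3), np.eye(3) * 1.01, np.diag([5.0, 0.2, 1.0])]
matrices = [{'warp_matrix': m, 'eigvals': np.linalg.eigvals(m)} for m in candidates]

for m1 in matrices:
    acc = np.zeros(3)
    for m2 in matrices:
        acc += abs(m1['eigvals'] - m2['eigvals'])
    m1['score'] = acc.sum()  # small score = close to the consensus

matrices.sort(key=lambda x: x['score'])
print(matrices[0]['warp_matrix'])  # one of the two agreeing candidates wins

Note the comparison is elementwise, so it implicitly assumes a consistent eigenvalue ordering across candidates.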
def compute_homography(image_filename, align_image_filename):
    try:
        # Convert images to grayscale if needed
        image = imread(image_filename, unchanged=True, anydepth=True)
        if image.shape[2] == 3:
            image_gray = to_8bit(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))
        else:
            image_gray = to_8bit(image[:, :, 0])

        align_image = imread(align_image_filename, unchanged=True, anydepth=True)
        if align_image.shape[2] == 3:
            align_image_gray = to_8bit(cv2.cvtColor(align_image, cv2.COLOR_BGR2GRAY))
        else:
            align_image_gray = to_8bit(align_image[:, :, 0])

        def compute_using(algorithm):
            h = algorithm(image_gray, align_image_gray)
            if h is None:
                return None, (None, None)

            det = np.linalg.det(h)

            # Check #1: the homography's determinant must not be close to zero
            if abs(det) < 0.25:
                return None, (None, None)

            # Check #2: the ratio of the first-to-last singular value must be sane (not too high)
            svd = np.linalg.svd(h, compute_uv=False)
            if svd[-1] == 0:
                return None, (None, None)

            ratio = svd[0] / svd[-1]
            if ratio > 100000:
                return None, (None, None)

            return h, (align_image_gray.shape[1], align_image_gray.shape[0])

        algo = 'feat'
        result = compute_using(find_features_homography)

        if result[0] is None:
            algo = 'ecc'
            log.ODM_INFO("Can't use feature matching, will use ECC (this might take a bit)")
            result = compute_using(find_ecc_homography)
            if result[0] is None:
                algo = None

        warp_matrix, dimension = result
        return warp_matrix, dimension, algo

    except Exception as e:
        log.ODM_WARNING("Compute homography: %s" % str(e))
        return None, (None, None), None
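The two sanity checks reject degenerate homographies before they are accepted as alignment candidates. A quick standalone illustration (toy matrices; thresholds copied from the code above):

import numpy as np

def plausible(h, min_det=0.25, max_sv_ratio=100000):
    # A near-zero determinant means the mapping collapses area.
    if abs(np.linalg.det(h)) < min_det:
        return False
    sv = np.linalg.svd(h, compute_uv=False)
    # A huge first-to-last singular value ratio means extreme distortion.
    return sv[-1] != 0 and sv[0] / sv[-1] <= max_sv_ratio

print(plausible(np.eye(3)))                  # True
print(plausible(np.diag([1.0, 1.0, 1e-9])))  # False (determinant ~ 0)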
def find_ecc_homography(image_gray, align_image_gray, number_of_iterations=2500, termination_eps=1e-9):
    image_gray = to_8bit(gradient(gaussian(image_gray)))
    align_image_gray = to_8bit(gradient(gaussian(align_image_gray)))

    # Define the motion model
    warp_matrix = np.eye(3, 3, dtype=np.float32)

    # Define termination criteria
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                number_of_iterations, termination_eps)

    _, warp_matrix = cv2.findTransformECC(image_gray, align_image_gray, warp_matrix,
                                          cv2.MOTION_HOMOGRAPHY, criteria, inputMask=None, gaussFiltSize=9)

    return warp_matrix


def find_features_homography(image_gray, align_image_gray, feature_retention=0.25):
    # Detect SIFT features and compute descriptors
    detector = cv2.SIFT_create(edgeThreshold=10, contrastThreshold=0.1)
    kp_image, desc_image = detector.detectAndCompute(image_gray, None)
    kp_align_image, desc_align_image = detector.detectAndCompute(align_image_gray, None)

    # Match
    bf = cv2.BFMatcher(cv2.NORM_L1, crossCheck=True)
    matches = bf.match(desc_image, desc_align_image)

    # Sort by score
    matches.sort(key=lambda x: x.distance, reverse=False)

    # Remove bad matches
    num_good_matches = int(len(matches) * feature_retention)
    matches = matches[:num_good_matches]

    # Debug
    # imMatches = cv2.drawMatches(im1, kp_image, im2, kp_align_image, matches, None)
    # cv2.imwrite("matches.jpg", imMatches)

    # Extract location of good matches
    points_image = np.zeros((len(matches), 2), dtype=np.float32)
    points_align_image = np.zeros((len(matches), 2), dtype=np.float32)

    for i, match in enumerate(matches):
        points_image[i, :] = kp_image[match.queryIdx].pt
        points_align_image[i, :] = kp_align_image[match.trainIdx].pt

    # Find homography
    h, _ = cv2.findHomography(points_image, points_align_image, cv2.RANSAC)
    return h
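A short usage sketch tying the two matchers together the same way compute_homography does, feature matching first with ECC as the slow fallback (file names invented; assumes two single-band images of the same scene):

import cv2

im = cv2.imread("band_nir.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
ref = cv2.imread("band_red.tif", cv2.IMREAD_GRAYSCALE)

h = find_features_homography(im, ref)
if h is None:
    h = find_ecc_homography(im, ref)  # gradient-based, much slower

aligned = cv2.warpPerspective(im, h, (ref.shape[1], ref.shape[0]))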
def gradient(im, ksize=5):
    im = local_normalize(im)
    grad_x = cv2.Sobel(im, cv2.CV_32F, 1, 0, ksize=ksize)
    grad_y = cv2.Sobel(im, cv2.CV_32F, 0, 1, ksize=ksize)
    grad = cv2.addWeighted(np.absolute(grad_x), 0.5, np.absolute(grad_y), 0.5, 0)
    return grad


def local_normalize(im):
    width, _ = im.shape
    disksize = int(width / 5)
    if disksize % 2 == 0:
        disksize = disksize + 1
    selem = disk(disksize)
    im = rank.equalize(im, selem=selem)
    return im


def align_image(image, warp_matrix, dimension):
    if warp_matrix.shape == (3, 3):
        return cv2.warpPerspective(image, warp_matrix, dimension)
    else:
        return cv2.warpAffine(image, warp_matrix, dimension)
def to_8bit(image):
    if image.dtype == np.uint8:
        return image

    # Convert to 8bit
    try:
        data_range = np.iinfo(image.dtype)
        value_range = float(data_range.max) - float(data_range.min)
    except ValueError:
        # For floats use the actual range of the image values
        value_range = float(image.max()) - float(image.min())

    image = image.astype(np.float32)
    image *= 255.0 / value_range
    np.around(image, out=image)
    image[image > 255] = 255
    image[image < 0] = 0
    image = image.astype(np.uint8)

    return image
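For instance, a uint16 image is scaled by 255 / 65535; a rough standalone check (synthetic array, not real imagery):

import numpy as np

img16 = np.array([[0, 32768, 65535]], dtype=np.uint16)
print(to_8bit(img16))  # [[  0 128 255]] -- the full uint16 range maps onto 0..255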
@ -0,0 +1,41 @@
import os
from opendm import log


def replace_nvm_images(src_nvm_file, img_map, dst_nvm_file):
    """
    Create a new NVM file from an existing NVM file,
    replacing the image references based on img_map,
    where img_map is a dict { "old_image" --> "new_image" } (filename only).
    The function does not write the points information (it is discarded).
    """
    with open(src_nvm_file) as f:
        lines = list(map(str.strip, f.read().split("\n")))

    # Quick check
    if len(lines) < 3 or lines[0] != "NVM_V3" or lines[1].strip() != "":
        raise Exception("%s does not seem to be a valid NVM file" % src_nvm_file)

    num_images = int(lines[2])
    entries = []

    for l in lines[3:3 + num_images]:
        image_path, *p = l.split(" ")

        dir_name = os.path.dirname(image_path)
        file_name = os.path.basename(image_path)

        new_filename = img_map.get(file_name)
        if new_filename is not None:
            entries.append("%s %s" % (os.path.join(dir_name, new_filename), " ".join(p)))
        else:
            log.ODM_WARNING("Cannot find %s in image map for %s" % (file_name, dst_nvm_file))

    if num_images != len(entries):
        raise Exception("Cannot write %s, not all band images have been matched" % dst_nvm_file)

    with open(dst_nvm_file, "w") as f:
        f.write("NVM_V3\n\n%s\n" % len(entries))
        f.write("\n".join(entries))
        f.write("\n\n0\n0\n\n0")
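For reference, the NVM_V3 layout this reader/writer assumes looks roughly like the following (toy content: two cameras, zero 3D points; all numeric fields invented). Each camera line is the filename followed by focal length, rotation quaternion, camera center, radial distortion and a terminating 0; the function above only swaps the filename token and copies the rest of each line verbatim:

NVM_V3

2
images/IMG_0001.tif 800 0.1 0.2 0.3 0.4 1.0 2.0 3.0 0 0
images/IMG_0002.tif 800 0.1 0.2 0.3 0.4 1.1 2.1 3.1 0 0

0
0

0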
@ -15,6 +15,7 @@ from opensfm.large import metadataset
 from opensfm.large import tools
 from opensfm.actions import undistort
 from opensfm.dataset import DataSet
+from opendm.multispectral import get_photos_by_band

 class OSFMContext:
     def __init__(self, opensfm_project_path):
@ -55,7 +56,7 @@ class OSFMContext:
         exit(1)

-    def setup(self, args, images_path, photos, reconstruction, append_config = [], rerun=False):
+    def setup(self, args, images_path, reconstruction, append_config = [], rerun=False):
         """
         Setup a OpenSfM project
         """
@ -67,7 +68,15 @@ class OSFMContext:
         list_path = os.path.join(self.opensfm_project_path, 'image_list.txt')
         if not io.file_exists(list_path) or rerun:
+
+            if reconstruction.multi_camera:
+                photos = get_photos_by_band(reconstruction.multi_camera, args.primary_band)
+                if len(photos) < 1:
+                    raise Exception("Not enough images in selected band %s" % args.primary_band.lower())
+                log.ODM_INFO("Reconstruction will use %s images from %s band" % (len(photos), args.primary_band.lower()))
+            else:
+                photos = reconstruction.photos
+
             # create file list
             has_alt = True
             has_gps = False
@ -77,6 +86,7 @@ class OSFMContext:
                     has_alt = False
                 if photo.latitude is not None and photo.longitude is not None:
                     has_gps = True

                 fout.write('%s\n' % os.path.join(images_path, photo.filename))

         # check for image_groups.txt (split-merge)
@ -95,16 +105,9 @@ class OSFMContext:
             except Exception as e:
                 log.ODM_WARNING("Cannot set camera_models_overrides.json: %s" % str(e))

-            use_bow = False
+            use_bow = args.matcher_type == "bow"
             feature_type = "SIFT"

-            matcher_neighbors = args.matcher_neighbors
-            if matcher_neighbors != 0 and reconstruction.multi_camera is not None:
-                matcher_neighbors *= len(reconstruction.multi_camera)
-                log.ODM_INFO("Increasing matcher neighbors to %s to accomodate multi-camera setup" % matcher_neighbors)
-                log.ODM_INFO("Multi-camera setup, using BOW matching")
-                use_bow = True
-
             # GPSDOP override if we have GPS accuracy information (such as RTK)
             if 'gps_accuracy_is_set' in args:
                 log.ODM_INFO("Forcing GPS DOP to %s for all images" % args.gps_accuracy)
@ -178,7 +181,7 @@ class OSFMContext:
                 "feature_process_size: %s" % feature_process_size,
                 "feature_min_frames: %s" % args.min_num_features,
                 "processes: %s" % args.max_concurrency,
-                "matching_gps_neighbors: %s" % matcher_neighbors,
+                "matching_gps_neighbors: %s" % args.matcher_neighbors,
                 "matching_gps_distance: %s" % args.matcher_distance,
                 "depthmap_method: %s" % args.opensfm_depthmap_method,
                 "depthmap_resolution: %s" % depthmap_resolution,
@ -188,8 +191,7 @@ class OSFMContext:
                 "undistorted_image_format: tif",
                 "bundle_outlier_filtering_type: AUTO",
                 "align_orientation_prior: vertical",
-                "triangulation_type: ROBUST",
-                "bundle_common_position_constraints: %s" % ('no' if reconstruction.multi_camera is None else 'yes'),
+                "triangulation_type: ROBUST"
             ]

             if args.camera_lens != 'auto':
@ -313,15 +315,65 @@ class OSFMContext:
         else:
             log.ODM_INFO("Already extracted cameras")

-    def convert_and_undistort(self, rerun=False, imageFilter=None):
+    def convert_and_undistort(self, rerun=False, imageFilter=None, image_list=None, runId="nominal"):
         log.ODM_INFO("Undistorting %s ..." % self.opensfm_project_path)
-        undistorted_images_path = self.path("undistorted", "images")
+        done_flag_file = self.path("undistorted", "%s_done.txt" % runId)

-        if not io.dir_exists(undistorted_images_path) or rerun:
-            undistort.run_dataset(DataSet(self.opensfm_project_path), "reconstruction.json",
+        if not io.file_exists(done_flag_file) or rerun:
+            ds = DataSet(self.opensfm_project_path)
+
+            if image_list is not None:
+                ds._set_image_list(image_list)
+
+            undistort.run_dataset(ds, "reconstruction.json",
                                   0, None, "undistorted", imageFilter)
+
+            self.touch(done_flag_file)
         else:
-            log.ODM_WARNING("Found an undistorted directory in %s" % undistorted_images_path)
+            log.ODM_WARNING("Already undistorted (%s)" % runId)
+
+    def restore_reconstruction_backup(self):
+        if os.path.exists(self.recon_backup_file()):
+            # This time export the actual reconstruction.json
+            # (containing only the primary band)
+            if os.path.exists(self.recon_file()):
+                os.remove(self.recon_file())
+            os.rename(self.recon_backup_file(), self.recon_file())
+            log.ODM_INFO("Restored reconstruction.json")
+
+    def backup_reconstruction(self):
+        if os.path.exists(self.recon_backup_file()):
+            os.remove(self.recon_backup_file())
+
+        log.ODM_INFO("Backing up reconstruction")
+        shutil.copyfile(self.recon_file(), self.recon_backup_file())
+
+    def recon_backup_file(self):
+        return self.path("reconstruction.backup.json")
+
+    def recon_file(self):
+        return self.path("reconstruction.json")
+
+    def add_shots_to_reconstruction(self, p2s):
+        with open(self.recon_file()) as f:
+            reconstruction = json.loads(f.read())
+
+        # Augment reconstruction.json
+        for recon in reconstruction:
+            shots = recon['shots']
+            sids = list(shots)
+
+            for shot_id in sids:
+                secondary_photos = p2s.get(shot_id)
+                if secondary_photos is None:
+                    log.ODM_WARNING("Cannot find secondary photos for %s" % shot_id)
+                    continue
+
+                for p in secondary_photos:
+                    shots[p.filename] = shots[shot_id]
+
+        with open(self.recon_file(), 'w') as f:
+            f.write(json.dumps(reconstruction))

     def update_config(self, cfg_dict):
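The shot-duplication trick in add_shots_to_reconstruction is easier to see on toy data (dict literals invented; in ODM the p2s values are photo objects, so .filename is read off them):

# Toy reconstruction with one primary-band shot.
recon = [{'shots': {'IMG_01_RGB.tif': {'rotation': [0, 0, 0], 'translation': [0, 0, 0]}}}]

# Map primary filename -> secondary filenames (strings stand in for photo objects).
p2s = {'IMG_01_RGB.tif': ['IMG_01_NIR.tif', 'IMG_01_RED.tif']}

for r in recon:
    for shot_id in list(r['shots']):
        for secondary in p2s.get(shot_id, []):
            r['shots'][secondary] = r['shots'][shot_id]  # same pose, new image name

print(sorted(recon[0]['shots']))  # all three bands now share the primary's pose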
@ -19,7 +19,7 @@ RUN rm -rf \
     /code/SuperBuild/build/opencv \
     /code/SuperBuild/download \
     /code/SuperBuild/src/ceres \
-    /code/SuperBuild/src/entwine \
+    /code/SuperBuild/src/untwine \
     /code/SuperBuild/src/gflags \
     /code/SuperBuild/src/hexer \
     /code/SuperBuild/src/lastools \
@ -6,10 +6,8 @@ cryptography==3.2.1
 edt==2.0.2
 ExifRead==2.3.2
 Fiona==1.8.17
-gpxpy==1.4.2
 joblib==0.17.0
 laspy==1.7.0
-loky==2.9.0
 lxml==4.6.1
 matplotlib==3.3.3
 networkx==2.5
@ -25,6 +23,7 @@ PyYAML==5.1
 rasterio==1.1.8
 repoze.lru==0.7
 scikit-learn==0.23.2
+scikit-image==0.17.2
 scipy==1.5.4
 xmltodict==0.12.0
@ -202,7 +202,7 @@ parts:
         - -odm/SuperBuild/build/openmvs
         - -odm/SuperBuild/download
         - -odm/SuperBuild/src/ceres
-        - -odm/SuperBuild/src/entwine
+        - -odm/SuperBuild/src/untwine
         - -odm/SuperBuild/src/gflags
         - -odm/SuperBuild/src/hexer
         - -odm/SuperBuild/src/lastools
@ -5,6 +5,7 @@ from opendm import io
 from opendm import system
 from opendm import context
 from opendm import types
+from opendm.multispectral import get_primary_band_name

 class ODMMvsTexStage(types.ODM_Stage):
     def process(self, args, outputs):
@ -24,7 +25,8 @@ class ODMMvsTexStage(types.ODM_Stage):
                 'out_dir': os.path.join(tree.odm_texturing, subdir),
                 'model': tree.odm_mesh,
                 'nadir': False,
-                'nvm_file': nvm_file
+                'nvm_file': nvm_file,
+                'labeling_file': os.path.join(tree.odm_texturing, "odm_textured_model_labeling.vec") if subdir else None
             }]

             if not args.use_3dmesh:
@ -32,12 +34,14 @@ class ODMMvsTexStage(types.ODM_Stage):
                 'out_dir': os.path.join(tree.odm_25dtexturing, subdir),
                 'model': tree.odm_25dmesh,
                 'nadir': True,
-                'nvm_file': nvm_file
+                'nvm_file': nvm_file,
+                'labeling_file': os.path.join(tree.odm_25dtexturing, "odm_textured_model_labeling.vec") if subdir else None
             }]

         if reconstruction.multi_camera:
             for band in reconstruction.multi_camera:
-                primary = band == reconstruction.multi_camera[0]
+                primary = band['name'] == get_primary_band_name(reconstruction.multi_camera, args.primary_band)
                 nvm_file = os.path.join(tree.opensfm, "undistorted", "reconstruction_%s.nvm" % band['name'].lower())
                 add_run(nvm_file, primary, band['name'].lower())
         else:
@ -57,23 +61,14 @@ class ODMMvsTexStage(types.ODM_Stage):
                     % odm_textured_model_obj)

                 # Format arguments to fit Mvs-Texturing app
-                skipGeometricVisibilityTest = ""
                 skipGlobalSeamLeveling = ""
                 skipLocalSeamLeveling = ""
-                skipHoleFilling = ""
-                keepUnseenFaces = ""
                 nadir = ""

-                if (self.params.get('skip_vis_test')):
-                    skipGeometricVisibilityTest = "--skip_geometric_visibility_test"
                 if (self.params.get('skip_glob_seam_leveling')):
                     skipGlobalSeamLeveling = "--skip_global_seam_leveling"
                 if (self.params.get('skip_loc_seam_leveling')):
                     skipLocalSeamLeveling = "--skip_local_seam_leveling"
-                if (self.params.get('skip_hole_fill')):
-                    skipHoleFilling = "--skip_hole_filling"
-                if (self.params.get('keep_unseen_faces')):
-                    keepUnseenFaces = "--keep_unseen_faces"
                 if (r['nadir']):
                     nadir = '--nadir_mode'
@ -84,14 +79,13 @@ class ODMMvsTexStage(types.ODM_Stage):
                     'model': r['model'],
                     'dataTerm': self.params.get('data_term'),
                     'outlierRemovalType': self.params.get('outlier_rem_type'),
-                    'skipGeometricVisibilityTest': skipGeometricVisibilityTest,
                     'skipGlobalSeamLeveling': skipGlobalSeamLeveling,
                     'skipLocalSeamLeveling': skipLocalSeamLeveling,
-                    'skipHoleFilling': skipHoleFilling,
-                    'keepUnseenFaces': keepUnseenFaces,
                     'toneMapping': self.params.get('tone_mapping'),
                     'nadirMode': nadir,
-                    'nvm_file': r['nvm_file']
+                    'nvm_file': r['nvm_file'],
+                    'intermediate': '--no_intermediate_results' if (r['labeling_file'] or not reconstruction.multi_camera) else '',
+                    'labelingFile': '-L "%s"' % r['labeling_file'] if r['labeling_file'] else ''
                 }

                 mvs_tmp_dir = os.path.join(r['out_dir'], 'tmp')
@ -105,21 +99,11 @@ class ODMMvsTexStage(types.ODM_Stage):
                 system.run('{bin} {nvm_file} {model} {out_dir} '
                            '-d {dataTerm} -o {outlierRemovalType} '
                            '-t {toneMapping} '
-                           '{skipGeometricVisibilityTest} '
+                           '{intermediate} '
                            '{skipGlobalSeamLeveling} '
                            '{skipLocalSeamLeveling} '
-                           '{skipHoleFilling} '
-                           '{keepUnseenFaces} '
-                           '{nadirMode}'.format(**kwargs))
-
-                if args.optimize_disk_space:
-                    cleanup_files = [
-                        os.path.join(r['out_dir'], "odm_textured_model_data_costs.spt"),
-                        os.path.join(r['out_dir'], "odm_textured_model_labeling.vec"),
-                    ]
-                    for f in cleanup_files:
-                        if io.file_exists(f):
-                            os.remove(f)
+                           '{nadirMode} '
+                           '{labelingFile} '.format(**kwargs))

                 progress += progress_per_run
                 self.update_progress(progress)
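Reading the two new kwargs together, the apparent intent (an interpretation, not stated in the diff) is that the primary band keeps mvs-texturing's intermediate results, including the labeling .vec, so secondary bands can reuse the same face labeling via -L instead of recomputing it. Sketched per run:

# Primary band run (subdir == ""):
#   labeling_file -> None
#   intermediate  -> ''  (intermediate results, including the labeling .vec, are kept)
#   labelingFile  -> ''
#
# Secondary band run (subdir == "nir", for example):
#   labeling_file -> <odm_texturing>/odm_textured_model_labeling.vec
#   intermediate  -> '--no_intermediate_results'
#   labelingFile  -> '-L "<odm_texturing>/odm_textured_model_labeling.vec"'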
|
@ -46,11 +46,8 @@ class ODMApp:
|
||||||
texturing = ODMMvsTexStage('mvs_texturing', args, progress=70.0,
|
texturing = ODMMvsTexStage('mvs_texturing', args, progress=70.0,
|
||||||
data_term=args.texturing_data_term,
|
data_term=args.texturing_data_term,
|
||||||
outlier_rem_type=args.texturing_outlier_removal_type,
|
outlier_rem_type=args.texturing_outlier_removal_type,
|
||||||
skip_vis_test=args.texturing_skip_visibility_test,
|
|
||||||
skip_glob_seam_leveling=args.texturing_skip_global_seam_leveling,
|
skip_glob_seam_leveling=args.texturing_skip_global_seam_leveling,
|
||||||
skip_loc_seam_leveling=args.texturing_skip_local_seam_leveling,
|
skip_loc_seam_leveling=args.texturing_skip_local_seam_leveling,
|
||||||
skip_hole_fill=args.texturing_skip_hole_filling,
|
|
||||||
keep_unseen_faces=args.texturing_keep_unseen_faces,
|
|
||||||
tone_mapping=args.texturing_tone_mapping)
|
tone_mapping=args.texturing_tone_mapping)
|
||||||
georeferencing = ODMGeoreferencingStage('odm_georeferencing', args, progress=80.0,
|
georeferencing = ODMGeoreferencingStage('odm_georeferencing', args, progress=80.0,
|
||||||
gcp_file=args.gcp,
|
gcp_file=args.gcp,
|
||||||
|
|
|
@ -9,6 +9,7 @@ from opendm import system
 from opendm import context
 from opendm.cropper import Cropper
 from opendm import point_cloud
+from opendm.multispectral import get_primary_band_name

 class ODMGeoreferencingStage(types.ODM_Stage):
     def process(self, args, outputs):
@ -45,7 +46,7 @@ class ODMGeoreferencingStage(types.ODM_Stage):

         if reconstruction.multi_camera:
             for band in reconstruction.multi_camera:
-                primary = band == reconstruction.multi_camera[0]
+                primary = band['name'] == get_primary_band_name(reconstruction.multi_camera, args.primary_band)
                 add_run(primary, band['name'].lower())
         else:
             add_run()
@ -122,15 +123,14 @@ class ODMGeoreferencingStage(types.ODM_Stage):

             if args.fast_orthophoto:
                 decimation_step = 10
-            elif args.use_opensfm_dense:
-                decimation_step = 40
             else:
-                decimation_step = 90
+                decimation_step = 40

             # More aggressive decimation for large datasets
             if not args.fast_orthophoto:
                 decimation_step *= int(len(reconstruction.photos) / 1000) + 1
+                decimation_step = min(decimation_step, 95)

             try:
                 cropper.create_bounds_gpkg(tree.odm_georeferencing_model_laz, args.crop,
                                            decimation_step=decimation_step)
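The revised decimation schedule is easier to read in isolation; a small sketch (function name invented for the example):

def bounds_decimation_step(num_photos, fast_orthophoto=False):
    # Base step: 10 for fast orthophotos, otherwise 40 (was 90 before this change).
    step = 10 if fast_orthophoto else 40
    if not fast_orthophoto:
        # Grow with dataset size, but cap the step so at least
        # every 95th point is still sampled.
        step *= int(num_photos / 1000) + 1
        step = min(step, 95)
    return step

print(bounds_decimation_step(500))   # 40
print(bounds_decimation_step(2500))  # 40 * 3 = 120, capped to 95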
@ -11,6 +11,7 @@ from opendm.concurrency import get_max_memory
 from opendm.cutline import compute_cutline
 from pipes import quote
 from opendm import pseudogeo
+from opendm.multispectral import get_primary_band_name

 class ODMOrthoPhotoStage(types.ODM_Stage):
     def process(self, args, outputs):
@ -72,7 +73,7 @@ class ODMOrthoPhotoStage(types.ODM_Stage):

         if reconstruction.multi_camera:
             for band in reconstruction.multi_camera:
-                primary = band == reconstruction.multi_camera[0]
+                primary = band['name'] == get_primary_band_name(reconstruction.multi_camera, args.primary_band)
                 subdir = ""
                 if not primary:
                     subdir = band['name'].lower()
@ -8,6 +8,7 @@ from opendm import point_cloud
 from opendm import types
 from opendm.utils import get_depthmap_resolution
 from opendm.osfm import OSFMContext
+from opendm.multispectral import get_primary_band_name

 class ODMOpenMVSStage(types.ODM_Stage):
     def process(self, args, outputs):
@ -28,11 +29,6 @@ class ODMOpenMVSStage(types.ODM_Stage):
         # export reconstruction from opensfm
         octx = OSFMContext(tree.opensfm)
         cmd = 'export_openmvs'
-        if reconstruction.multi_camera:
-            # Export only the primary band
-            primary = reconstruction.multi_camera[0]
-            image_list = os.path.join(tree.opensfm, "image_list_%s.txt" % primary['name'].lower())
-            cmd += ' --image_list "%s"' % image_list
         octx.run(cmd)

         self.update_progress(10)
@ -13,6 +13,7 @@ from opendm import types
 from opendm.utils import get_depthmap_resolution
 from opendm.osfm import OSFMContext
 from opendm import multispectral
+from opendm import nvm

 class ODMOpenSfMStage(types.ODM_Stage):
     def process(self, args, outputs):
@ -25,7 +26,7 @@ class ODMOpenSfMStage(types.ODM_Stage):
             exit(1)

         octx = OSFMContext(tree.opensfm)
-        octx.setup(args, tree.dataset_raw, photos, reconstruction=reconstruction, rerun=self.rerun())
+        octx.setup(args, tree.dataset_raw, reconstruction=reconstruction, rerun=self.rerun())
         octx.extract_metadata(self.rerun())
         self.update_progress(20)
         octx.feature_matching(self.rerun())
@ -48,13 +49,6 @@ class ODMOpenSfMStage(types.ODM_Stage):
             self.next_stage = None
             return

-        if args.fast_orthophoto:
-            output_file = octx.path('reconstruction.ply')
-        elif args.use_opensfm_dense:
-            output_file = tree.opensfm_model
-        else:
-            output_file = tree.opensfm_reconstruction
-
         updated_config_flag_file = octx.path('updated_config.txt')

         # Make sure it's capped by the depthmap-resolution arg,
@ -68,56 +62,122 @@ class ODMOpenSfMStage(types.ODM_Stage):
             octx.update_config({'undistorted_image_max_size': outputs['undist_image_max_size']})
             octx.touch(updated_config_flag_file)

-        # These will be used for texturing / MVS
-        if args.radiometric_calibration == "none":
-            octx.convert_and_undistort(self.rerun())
-        else:
-            def radiometric_calibrate(shot_id, image):
-                photo = reconstruction.get_photo(shot_id)
-                return multispectral.dn_to_reflectance(photo, image, use_sun_sensor=args.radiometric_calibration=="camera+sun")
-
-            octx.convert_and_undistort(self.rerun(), radiometric_calibrate)
+        # Undistorted images will be used for texturing / MVS
+        alignment_info = None
+        primary_band_name = None
+        undistort_pipeline = []
+
+        def undistort_callback(shot_id, image):
+            for func in undistort_pipeline:
+                image = func(shot_id, image)
+            return image
+
+        def radiometric_calibrate(shot_id, image):
+            photo = reconstruction.get_photo(shot_id)
+            return multispectral.dn_to_reflectance(photo, image, use_sun_sensor=args.radiometric_calibration=="camera+sun")
+
+        def align_to_primary_band(shot_id, image):
+            photo = reconstruction.get_photo(shot_id)
+
+            # No need to align primary
+            if photo.band_name == primary_band_name:
+                return image
+
+            ainfo = alignment_info.get(photo.band_name)
+            if ainfo is not None:
+                return multispectral.align_image(image, ainfo['warp_matrix'], ainfo['dimension'])
+            else:
+                log.ODM_WARNING("Cannot align %s, no alignment matrix could be computed. Band alignment quality might be affected." % (shot_id))
+                return image
+
+        if args.radiometric_calibration != "none":
+            undistort_pipeline.append(radiometric_calibrate)
+
+        image_list_override = None
+
+        if reconstruction.multi_camera:
+            # Undistort only secondary bands
+            image_list_override = [os.path.join(tree.dataset_raw, p.filename) for p in photos]  # if p.band_name.lower() != primary_band_name.lower()
+
+            # We back up the original reconstruction.json and tracks.csv,
+            # then augment them by duplicating the primary band
+            # camera shots for each band, so that exports, undistortion,
+            # etc. include all bands.
+            # We finally restore the original files later.
+            added_shots_file = octx.path('added_shots_done.txt')
+
+            if not io.file_exists(added_shots_file) or self.rerun():
+                primary_band_name = multispectral.get_primary_band_name(reconstruction.multi_camera, args.primary_band)
+                s2p, p2s = multispectral.compute_band_maps(reconstruction.multi_camera, primary_band_name)
+                alignment_info = multispectral.compute_alignment_matrices(reconstruction.multi_camera, primary_band_name, tree.dataset_raw, s2p, p2s, max_concurrency=args.max_concurrency)
+
+                log.ODM_INFO("Adding shots to reconstruction")
+
+                octx.backup_reconstruction()
+                octx.add_shots_to_reconstruction(p2s)
+                octx.touch(added_shots_file)
+
+            undistort_pipeline.append(align_to_primary_band)
+
+        octx.convert_and_undistort(self.rerun(), undistort_callback, image_list_override)

         self.update_progress(80)

         if reconstruction.multi_camera:
-            # Dump band image lists
-            log.ODM_INFO("Multiple bands found")
-            for band in reconstruction.multi_camera:
-                log.ODM_INFO("Exporting %s band" % band['name'])
-                image_list_file = octx.path("image_list_%s.txt" % band['name'].lower())
-
-                if not io.file_exists(image_list_file) or self.rerun():
-                    with open(image_list_file, "w") as f:
-                        f.write("\n".join([p.filename for p in band['photos']]))
-                    log.ODM_INFO("Wrote %s" % image_list_file)
-                else:
-                    log.ODM_WARNING("Found a valid image list in %s for %s band" % (image_list_file, band['name']))
-
-                nvm_file = octx.path("undistorted", "reconstruction_%s.nvm" % band['name'].lower())
-                if not io.file_exists(nvm_file) or self.rerun():
-                    octx.run('export_visualsfm --points --image_list "%s"' % image_list_file)
-                    os.rename(tree.opensfm_reconstruction_nvm, nvm_file)
-                else:
-                    log.ODM_WARNING("Found a valid NVM file in %s for %s band" % (nvm_file, band['name']))
+            octx.restore_reconstruction_backup()
+
+            # Undistort primary band and write undistorted
+            # reconstruction.json, tracks.csv
+            octx.convert_and_undistort(self.rerun(), undistort_callback, runId='primary')

         if not io.file_exists(tree.opensfm_reconstruction_nvm) or self.rerun():
             octx.run('export_visualsfm --points')
         else:
             log.ODM_WARNING('Found a valid OpenSfM NVM reconstruction file in: %s' %
                             tree.opensfm_reconstruction_nvm)

+        if reconstruction.multi_camera:
+            log.ODM_INFO("Multiple bands found")
+
+            # Write NVM files for the various bands
+            for band in reconstruction.multi_camera:
+                nvm_file = octx.path("undistorted", "reconstruction_%s.nvm" % band['name'].lower())
+
+                img_map = {}
+                for fname in p2s:
+                    # Primary band maps to itself
+                    if band['name'] == primary_band_name:
+                        img_map[fname + '.tif'] = fname + '.tif'
+                    else:
+                        band_filename = next((p.filename for p in p2s[fname] if p.band_name == band['name']), None)
+
+                        if band_filename is not None:
+                            img_map[fname + '.tif'] = band_filename + '.tif'
+                        else:
+                            log.ODM_WARNING("Cannot find %s band equivalent for %s" % (band, fname))
+
+                nvm.replace_nvm_images(tree.opensfm_reconstruction_nvm, img_map, nvm_file)

         self.update_progress(85)

         # Skip dense reconstruction if necessary and export
         # sparse reconstruction instead
         if args.fast_orthophoto:
+            output_file = octx.path('reconstruction.ply')
+
             if not io.file_exists(output_file) or self.rerun():
                 octx.run('export_ply --no-cameras')
             else:
                 log.ODM_WARNING("Found a valid PLY reconstruction in %s" % output_file)

         elif args.use_opensfm_dense:
+            output_file = tree.opensfm_model
+
             if not io.file_exists(output_file) or self.rerun():
                 octx.run('compute_depthmaps')
             else:
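The undistort_pipeline/undistort_callback pair above is just function composition applied per shot; a minimal sketch with stand-in steps (the lambdas are placeholders, not real calibration or alignment):

pipeline = []

def undistort_callback(shot_id, image):
    # Apply each registered step in order: calibration first, then alignment.
    for func in pipeline:
        image = func(shot_id, image)
    return image

pipeline.append(lambda sid, im: im * 0.5)  # stand-in for radiometric_calibrate
pipeline.append(lambda sid, im: im + 1.0)  # stand-in for align_to_primary_band

print(undistort_callback("IMG_0001.tif", 2.0))  # (2.0 * 0.5) + 1.0 = 2.0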
@ -132,6 +192,9 @@ class ODMOpenSfMStage(types.ODM_Stage):

         if args.optimize_disk_space:
             os.remove(octx.path("tracks.csv"))
+            if io.file_exists(octx.recon_backup_file()):
+                os.remove(octx.recon_backup_file())
+
             if io.dir_exists(octx.path("undistorted", "depthmaps")):
                 files = glob.glob(octx.path("undistorted", "depthmaps", "*.npz"))
                 for f in files:
@ -50,7 +50,7 @@ class ODMSplitStage(types.ODM_Stage):
                 "submodel_overlap: %s" % args.split_overlap,
             ]

-            octx.setup(args, tree.dataset_raw, photos, reconstruction=reconstruction, append_config=config, rerun=self.rerun())
+            octx.setup(args, tree.dataset_raw, reconstruction=reconstruction, append_config=config, rerun=self.rerun())
             octx.extract_metadata(self.rerun())

             self.update_progress(5)
@ -11,7 +11,7 @@ if [ "$1" = "--setup" ]; then
     bash configure.sh reinstall

     touch .setupdevenv
-    apt update && apt install -y vim
+    apt update && apt install -y vim git
     chown -R $3:$4 /code
     chown -R $3:$4 /var/www
 fi
@ -22,6 +22,7 @@ if [ "$1" = "--setup" ]; then
     echo "$2:x:$4:" >> /etc/group
     echo "Adding $2 to /etc/shadow"
    echo "$2:x:14871::::::" >> /etc/shadow
+    echo "$2 ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
     echo "echo '' && echo '' && echo '' && echo '###################################' && echo 'ODM Dev Environment Ready. Hack on!' && echo '###################################' && echo '' && cd /code" > $HOME/.bashrc

     # Install qt creator