Compare commits

...

207 Commits

Author SHA1 Message Date
Piero Toffanin ae6726e536
Merge pull request #1760 from pierotofy/fastcut
Skip feathered raster generation when possible
2024-05-17 15:51:32 -04:00
Piero Toffanin 6da366f806 Windows fix 2024-05-17 15:23:10 -04:00
Piero Toffanin e4e27c21f2 Skip feathered raster generation when possible 2024-05-17 14:55:26 -04:00
Piero Toffanin f9136f7a0d
Merge pull request #1758 from idimitrovski/master
Support for DJI Mavic 2 Zoom srt files
2024-05-10 11:24:52 -04:00
idimitrovski a2d9eccad5 Support for DJI Mavic 2 Zoom srt files 2024-05-10 09:29:37 +02:00
Piero Toffanin 424d9e28a0
Merge pull request #1756 from andrewharvey/patch-1
Fix PoissonRecon failed with n threads log message
2024-04-18 11:53:46 -04:00
Andrew Harvey a0fbd71d41
Fix PoissonRecon failed with n threads log message
The message was reporting failure with n threads and retrying with n // 2; however, a few lines up, threads had already been set to n // 2, representing the next thread count to try.
2024-04-18 15:35:53 +10:00
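The fix is easier to see as code. A minimal sketch of the retry loop (function and variable names are assumed, not ODM's actual code):

```python
def reconstruct_with_fallback(run_poisson, max_threads):
    # Sketch: halve the thread count after each failed PoissonRecon attempt,
    # logging the count that actually failed (the bug logged the halved value).
    threads = max_threads
    while threads >= 1:
        if run_poisson(threads):
            return True
        print("PoissonRecon failed with %d threads, retrying with %d" %
              (threads, threads // 2))
        threads //= 2
    return False
```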
Piero Toffanin 6084d1dca0
Merge pull request #1754 from pierotofy/minviews
Use min views filter = 1
2024-04-11 23:18:26 -04:00
Piero Toffanin aef4182cf9 More solid OpenMVS clustering fallback 2024-04-11 14:18:20 -04:00
Piero Toffanin 6c0fe6e79d Bump version 2024-04-10 22:07:36 -04:00
Piero Toffanin 17dfc7599a Update pc-filter value to 5 2024-04-10 13:48:11 -04:00
Piero Toffanin a70e7445ad Update default feature-type, pc-filter values 2024-04-10 12:26:34 -04:00
Piero Toffanin 981bf88b48 Use min views filter = 1 2024-04-10 11:13:58 -04:00
Piero Toffanin ad63392e1a
Merge pull request #1752 from pierotofy/geobom
Fix BOM encoding bug with geo files
2024-04-02 12:53:35 -04:00
Piero Toffanin 77f8ffc8cd Fix BOM encoding bug with geo files 2024-04-02 12:46:20 -04:00
Piero Toffanin 4d7cf32a8c
Merge pull request #1751 from smathermather/fish-aye
replace fisheye with fisheye_opencv but keep API the same until 4.0
2024-03-11 23:32:19 -04:00
Stephen Mather 5a439c0ab6 replace fisheye with fisheye_opencv but keep API the same until 4.0 2024-03-11 22:56:48 -04:00
Piero Toffanin ffcda0dc57
Merge pull request #1749 from smathermather/increase-default-GPS-Accuracy
increase default GPS-Accuracy to 3m
2024-03-08 22:16:44 -05:00
Stephen Mather 2c6fd1dd9f
increase default GPS-Accuracy to 3m 2024-03-08 22:13:54 -05:00
Sylvain POULAIN cb3229a3d4
Add Mavic 3 rolling shutter, not enterprise version (#1747)
* Add Mavic 3 rolling shutter

* M3
2024-02-12 09:50:22 -05:00
Piero Toffanin fc9c94880f
Merge pull request #1746 from kielnino/set-extensionsused
GLTF - obj2glb - Set extensionsUsed in all cases to be consistent with the GLTF standard
2024-02-09 10:23:22 -05:00
kielnino b204a2eb98
set extensionsUsed in all cases 2024-02-09 15:06:02 +01:00
Piero Toffanin d9f77bea54
Merge pull request #1744 from kielnino/remove-unuses-mvs_tmp_dir
Update comment on mvs_tmp_dir
2024-02-01 09:14:23 -05:00
kielnino 10947ecddf
clarify usage of tmp directory 2024-02-01 12:02:06 +01:00
kielnino f7c7044823
remove unused mvs_tmp_dir 2024-02-01 09:25:10 +01:00
Piero Toffanin ae50133886
Merge pull request #1742 from pierotofy/eptclass
Classify point cloud before generating derivative outputs
2024-01-25 12:56:36 -05:00
Piero Toffanin 9fd3bf3edd Improve SRT parser to handle abs_alt altitude reference 2024-01-23 22:24:38 +00:00
Piero Toffanin fb85b754fb Classify point cloud before generating derivative outputs 2024-01-23 17:03:36 -05:00
Piero Toffanin 30f89c068c
Merge pull request #1739 from pierotofy/smartband
Fix build
2024-01-15 19:41:20 -05:00
Piero Toffanin 260b4ef864 Manually install numpy 2024-01-15 16:21:08 -05:00
Piero Toffanin fb5d88366e
Merge pull request #1738 from pierotofy/smartband
Ignore multispectral band groups that are missing images
2024-01-15 11:38:53 -05:00
Piero Toffanin f793627402 Ignore multispectral band groups that are missing images 2024-01-15 09:51:17 -05:00
Piero Toffanin 9183218f1b Bump version 2024-01-12 00:20:35 -05:00
Piero Toffanin 1283df206e
Merge pull request #1732 from OpenDroneMap/nolocalseam
Deprecate texturing-skip-local-seam-leveling
2023-12-11 15:25:51 -05:00
Piero Toffanin 76a061b86a Deprecate texturing-skip-local-seam-leveling 2023-12-11 14:57:21 -05:00
Piero Toffanin 32d933027e
Merge pull request #1731 from pierotofy/median
C++ median smoothing filter
2023-12-08 11:34:05 -05:00
Piero Toffanin a29280157e Add radius parameter 2023-12-07 16:12:15 -05:00
Piero Toffanin 704c285b8f Remove eigen dep 2023-12-07 15:58:12 -05:00
Piero Toffanin 5674e68e9f Median filtering using fastrasterfilter 2023-12-07 18:49:43 +00:00
Piero Toffanin d419d9f038
Merge pull request #1729 from pierotofy/corridor
dem2mesh improvements
2023-12-06 19:34:47 -05:00
Piero Toffanin b3ae35f5e5 Update dem2mesh 2023-12-06 13:52:24 -05:00
Piero Toffanin 18d4d31be7 Fix pc2dem.py 2023-12-06 12:34:07 -05:00
Piero Toffanin 16ccd277ec
Merge pull request #1728 from pierotofy/corridor
Improved DEM generation efficiency
2023-12-05 14:06:54 -05:00
Piero Toffanin 7048868f28 Improved DEM generation efficiency 2023-12-05 14:01:15 -05:00
Piero Toffanin b14ffd919a Remove need for one intermediate raster 2023-12-05 12:26:14 -05:00
Piero Toffanin 4d1d0350a5
Update issue-triage.yml 2023-12-05 10:12:06 -05:00
Piero Toffanin 7261c29efc Respect max_tile_size parameter 2023-12-04 22:49:06 -05:00
Piero Toffanin 2ccad6ee9d Fix renderdem bounds calculation 2023-12-04 22:26:04 -05:00
Piero Toffanin 6acf9835e5 Update issue-triage.yml 2023-11-29 23:34:52 -05:00
Piero Toffanin 5b5df3aaf7 Add issue-triage.yml 2023-11-29 23:33:36 -05:00
Piero Toffanin 26cc9fbf93
Merge pull request #1725 from pierotofy/renderdem
Render DEM tiles using RenderDEM
2023-11-28 13:10:35 -05:00
Piero Toffanin b08f955963 Use URL 2023-11-28 11:33:09 -05:00
Piero Toffanin d028873f63 Use PDAL fork 2023-11-28 11:24:50 -05:00
Piero Toffanin 2d2b809530 Set maxTiles check only in absence of georeferenced photos 2023-11-28 00:43:11 -05:00
Piero Toffanin 7e05a5b04e Minor fix 2023-11-27 16:34:08 -05:00
Piero Toffanin e0ab6ae7ed Bump version 2023-11-27 16:25:11 -05:00
Piero Toffanin eceae8d2e4 Render DEM tiles using RenderDEM 2023-11-27 16:20:21 -05:00
Piero Toffanin 55570385c1
Merge pull request #1720 from pierotofy/autorerun
Feat: Auto rerun-from
2023-11-13 13:42:04 -05:00
Piero Toffanin eed840c9bb Always auto-rerun from beginning with split 2023-11-13 13:40:55 -05:00
Piero Toffanin 8376f24f08 Remove duplicate stmt 2023-11-08 11:40:22 -05:00
Piero Toffanin 6d70a4f0be Fix processopts slice 2023-11-08 11:14:15 -05:00
Piero Toffanin 6df5e0b711 Feat: Auto rerun-from 2023-11-08 11:07:20 -05:00
Piero Toffanin 5d9564fda3
Merge pull request #1717 from pierotofy/fcp
Pin Eigen 3.4
2023-11-05 15:52:28 -05:00
Piero Toffanin eccb203d7a Pin eigen34 2023-11-05 15:45:12 -05:00
Piero Toffanin 2df4afaecf
Merge pull request #1716 from pierotofy/fcp
Fix fast_floor in FPC Filter, Invalid PLY file (expected 'property uint8 views')
2023-11-04 20:22:44 -04:00
Piero Toffanin e5ed68846e Fix OpenMVS subscene logic 2023-11-04 20:00:57 -04:00
Piero Toffanin 7cf71628f3 Fix fast_floor in FPC Filter 2023-11-04 13:32:40 -04:00
Piero Toffanin 237bf8fb87 Remove snap build 2023-10-30 00:39:09 -04:00
Piero Toffanin a542e7b78d
Merge pull request #1714 from pierotofy/dsp
Adaptive feature quality
2023-10-30 00:36:54 -04:00
Piero Toffanin 52fa5d12e6 Adaptive feature quality 2023-10-29 19:19:20 -04:00
Piero Toffanin e3296f0379
Merge pull request #1712 from pierotofy/dsp
Adds support for DSP SIFT
2023-10-29 18:29:11 -04:00
Piero Toffanin a06f6f19b2 Update OpenSfM 2023-10-29 17:54:12 -04:00
Piero Toffanin 2d94934595 Adds support for DSP SIFT 2023-10-27 22:33:43 -04:00
Piero Toffanin 08d03905e6
Merge pull request #1705 from MertenF/master
Make tiler zoom level configurable
2023-10-16 12:30:29 -04:00
Merten Fermont f70e55c9eb
Limit maximum tiler zoom level to 23 2023-10-16 09:59:48 +02:00
Merten Fermont a89803c2eb Use dem instead of orthophoto resolution for generating DEM tiles 2023-10-15 23:47:43 +02:00
Piero Toffanin de7595aeef
Merge pull request #1708 from pierotofy/reportmv
Add extra report file op, disable snap builds
2023-10-14 14:20:06 -04:00
Piero Toffanin aa0e9f68df Rename build file 2023-10-14 01:57:12 -04:00
Piero Toffanin 7ca122dbf6 Remove WSL install 2023-10-14 01:55:40 -04:00
Piero Toffanin 0d303aab16 Disable snap 2023-10-14 01:32:55 -04:00
Piero Toffanin 6dc0c98fa0 Remove previous report before move 2023-10-14 01:22:58 -04:00
Merten Fermont c679d400c8 Tiler zoom level is calculated from GSD
Instead of hardcoding a value, calculate the maximum zoom level at which there is still an increase in detail, using the configured orthophoto resolution or GSD.

The higher the latitude, the higher the tile resolution will be; this risks generating useless tiles, since there is no compensation for it. At the moment the worst-case resolution from the equator is used.

Zoom level calculation from: https://wiki.openstreetmap.org/wiki/Zoom_levels
2023-10-12 22:22:10 +02:00
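For reference, the computation the commit describes can be sketched as follows (constants from the linked OSM wiki page; this is an illustration, not the commit's literal code):

```python
import math

def max_zoom_for_gsd(gsd_m_per_px, tile_size=256):
    # Ground resolution at the equator at zoom 0 for Web Mercator tiles:
    # 2 * pi * earth_radius / tile_size, i.e. ~156543.03 m/px.
    zoom0_res = 2 * math.pi * 6378137 / tile_size
    # Last zoom level at which the tiles still gain detail for this GSD.
    return math.ceil(math.log2(zoom0_res / gsd_m_per_px))

print(max_zoom_for_gsd(0.05))  # a 5 cm/px orthophoto -> zoom 22
```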
Piero Toffanin 38af615657
Merge pull request #1704 from pierotofy/gpurunner
Run GPU build on self-hosted runner
2023-10-05 15:30:31 -04:00
Piero Toffanin fc8dd7c5c5 Run GPU build on self-hosted runner 2023-10-05 15:29:14 -04:00
Piero Toffanin 6eca279c4b
Merge pull request #1702 from pierotofy/altumpt
Altum-PT support
2023-10-04 13:51:48 -04:00
Piero Toffanin 681ee18925 Adds support for Altum-PT 2023-10-03 13:06:36 -04:00
Piero Toffanin f9a3c5eb0e
Merge pull request #1701 from pierotofy/windate
Fix start/end date on Windows and enforce band order normalization
2023-10-02 10:14:25 -04:00
Piero Toffanin a56b52d0df Pick green band by default, improve mavic 3M support 2023-09-29 13:54:01 -04:00
Piero Toffanin f6be28db2a Give RGB, Blue priority 2023-09-29 13:11:11 -04:00
Piero Toffanin 5988be1f57 Bump version 2023-09-29 13:05:06 -04:00
Piero Toffanin d9600741d1 Enforce band order normalization 2023-09-29 13:00:32 -04:00
Piero Toffanin 57c61d918d Fix start/end date on Windows 2023-09-29 12:11:19 -04:00
Piero Toffanin 7277eabd0b
Merge pull request #1697 from pierotofy/321
Compress GCP data before VLR inclusion
2023-09-08 13:56:24 -04:00
Piero Toffanin d78b8ff399 GCP file size check 2023-09-08 13:54:45 -04:00
Piero Toffanin d10bef2631 Compress GCP data before inclusion in VLR 2023-09-08 13:44:50 -04:00
Piero Toffanin 2930927207
Merge pull request #1696 from pierotofy/321
More memory efficient find_features_homography
2023-09-08 13:30:43 -04:00
Piero Toffanin 83fef16cb1 Increase max_size 2023-09-08 13:22:03 -04:00
Piero Toffanin 2fea4d9f3d More memory efficient find_features_homography 2023-09-08 13:12:52 -04:00
Piero Toffanin 50162147ce Bump version 2023-09-07 21:59:39 -04:00
Piero Toffanin 07b641dc09
Merge pull request #1695 from pierotofy/matcherfix
Fix minimum number of pictures for matcher neighbors
2023-09-06 10:15:27 -04:00
Piero Toffanin d2cd5d9336 2 --> 3 2023-09-06 10:13:36 -04:00
Piero Toffanin 340e32af8f
Merge pull request #1694 from pierotofy/matcherfix
Always use matcher-neighbors if less than 2 pictures
2023-09-06 10:11:09 -04:00
Piero Toffanin 8276751d07 Always use matcher-neighbors if less than 2 pictures 2023-09-06 10:09:14 -04:00
Piero Toffanin ebba01aad5
Merge pull request #1690 from pierotofy/mvsup
Fix ReconstructMesh segfault
2023-08-23 09:28:59 -04:00
Piero Toffanin f4549846de Fix ReconstructMesh segfault 2023-08-23 09:23:32 -04:00
Piero Toffanin f5604a05a8
Merge pull request #1689 from pierotofy/mvsup
Tower mode (OpenMVS update), fixes
2023-08-22 11:23:21 -04:00
Piero Toffanin 3fc46a1e04 Fix pc-filter 0 2023-08-21 19:59:38 +00:00
Piero Toffanin 4b8cf9af3d Upgrade OpenMVS 2023-08-21 19:42:21 +00:00
Piero Toffanin e9e18050a2
Merge pull request #1674 from Adrien-LUDWIG/median_smoothing_memory_optimization
Use windowed read/write in median_smoothing
2023-08-12 22:38:32 +02:00
Piero Toffanin 9d15982850
Merge pull request #1684 from mdchia/master
Adding README and reformatting of DJI image binner script
2023-08-07 09:42:46 +02:00
mdchia 820ea4a4e3 minor refactor for readability, add credits + README 2023-08-07 17:27:58 +10:00
Saijin-Naib e84c77dd56
Update config.py
Syntax fix for unterminated single quote
2023-07-29 01:12:54 -04:00
Stephen Mather d929d7b8fa
Update docs to reflect dem resolution defaults (#1683)
* Update docs to reflect dem resolution defaults
* Also ignore ignore-gsd, but also don't advertise it in orthophoto resolution. Replaces https://github.com/OpenDroneMap/docs/pull/176#issuecomment-1656550757
* Helpful note on GSD limit for elevation models too!
* Change ignore-gsd language to have greater clarity
2023-07-29 01:05:18 -04:00
Piero Toffanin b948109e8f
Merge pull request #1681 from sbonaime/gflags_2.2.2
Update CMakeLists.txt
2023-07-21 19:02:38 +02:00
Sebastien c3593c0f69
Update CMakeLists.txt
Fix https://github.com/OpenDroneMap/ODM/issues/1679

Update from gflags 2.1.2 (Mar 24, 2015) to gflags 2.2.2 (Nov 11, 2018)
2023-07-21 15:43:20 +02:00
Sebastien 5a20a22a1a
Update Dev instructions (#1678)
* Update utils.py

* Update README.md

* Update README.md

Update Dev instructions

* Update README.md

* Update README.md

Update Dev instructions

* Update utils.py
2023-07-19 12:50:45 +02:00
Adrien-ANTON-LUDWIG b4aa3a9be0 Avoid using rasterio "r+" open mode (ugly patch)
When using rasterio's "r+" open mode, the file is updated correctly while
open but completely wrong once saved.
2023-07-17 16:36:56 +00:00
Adrien-ANTON-LUDWIG 65c20796be Use temporary files to avoid reading altered data 2023-07-17 16:15:49 +00:00
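A minimal sketch of the workaround these two commits describe, assuming a rasterio-based helper (names are illustrative, not ODM's literal code):

```python
import os
import tempfile

import rasterio

def rewrite_band(path, transform):
    # Read what is needed first, write a fresh file, then swap it in,
    # instead of updating the raster in place with mode "r+".
    with rasterio.open(path) as src:
        profile = src.profile
        data = transform(src.read(1))
    fd, tmp = tempfile.mkstemp(suffix=".tif", dir=os.path.dirname(path))
    os.close(fd)
    with rasterio.open(tmp, "w", **profile) as dst:
        dst.write(data, 1)
    os.replace(tmp, path)
```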
Piero Toffanin 8bc251aea2 Semantic int_values fix 2023-07-15 18:39:41 +02:00
Piero Toffanin c32a8a5c59
Merge pull request #1677 from pierotofy/rflyfix
Fix RFLY EXIF parsing
2023-07-15 12:40:57 +02:00
Piero Toffanin f75a87977e Handle malformed GPS GPSAltitudeRef tags 2023-07-15 12:37:52 +02:00
Piero Toffanin e329c9a77b
Merge pull request #1676 from rexliuser/master
update cuda ver
2023-07-15 11:12:26 +02:00
rexliuser be1fec2bd7 update cuda ver 2023-07-15 14:03:57 +08:00
Adrien-ANTON-LUDWIG 87f82a1582 Add locks to fix racing conditions 2023-07-13 11:51:13 +00:00
Adrien-ANTON-LUDWIG 9b9ba724c6 Remove forgotten exit call
Uh oh. Sorry for this.
2023-07-13 10:25:14 +00:00
Adrien-ANTON-LUDWIG ee5ff3258f Use windowed read/write in median_smoothing
See the issue description in this forum comment:
https://community.opendronemap.org/t/post-processing-after-odm/16314/16?u=adrien-anton-ludwig

TL;DR:
Median smoothing used windowing to go through the array but read it
entirely in RAM. Now the full potential of windowing is exploited to
read/write by chunks.
2023-07-12 16:55:14 +00:00
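The chunked pattern the commit describes, as a minimal sketch (placeholder smooth function; the real filter also has to deal with window borders):

```python
import rasterio
from rasterio.windows import Window

def filter_by_chunks(src_path, dst_path, smooth, chunk=512):
    with rasterio.open(src_path) as src, \
         rasterio.open(dst_path, "w", **src.profile) as dst:
        for row in range(0, src.height, chunk):
            for col in range(0, src.width, chunk):
                win = Window(col, row,
                             min(chunk, src.width - col),
                             min(chunk, src.height - row))
                block = src.read(1, window=win)  # only this chunk in RAM
                dst.write(smooth(block), 1, window=win)
```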
Piero Toffanin 80fd9dffdc
Merge pull request #1673 from fr-damo/master
Update rollingshutter.py
2023-07-12 16:40:33 +02:00
fr-damo df0ea97321
Update rollingshutter.py
added line 45, Autel EVO II pro
2023-07-12 18:55:59 +10:00
Piero Toffanin 967fec0974
Merge pull request #1672 from fr-damo/patch-1
Update rollingshutter.py
2023-07-07 12:01:55 +02:00
fr-damo e1b5a5ef65
Update rollingshutter.py
added line 43 'parrot anafi': 39, # Parrot Anafi
2023-07-07 08:39:10 +10:00
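The rollingshutter.py changes in this log all add entries to a lookup table of per-camera sensor readout times. A sketch of that structure (the dict and helper names are assumed; the Anafi value comes from the message above):

```python
# camera make/model -> sensor readout time in milliseconds (assumed layout)
RS_DATABASE = {
    'parrot anafi': 39,  # Parrot Anafi, per the commit above
}

def readout_ms(make_model, default=30.0):
    # Hypothetical lookup helper; the fallback value is illustrative only.
    return RS_DATABASE.get(make_model.lower().strip(), default)
```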
Piero Toffanin 8121fca607 Increase auto-boundary distance factor 2023-07-05 16:19:15 +02:00
Piero Toffanin 80c4ce517c
Merge pull request #1671 from udaf-mcq/patch-1
Update rollingshutter.py
2023-07-01 10:24:31 +02:00
udaf-mcq afd38f631d
Update rollingshutter.py 2023-06-30 13:41:31 -06:00
Piero Toffanin eb95137a4c
Merge pull request #1669 from sbonaime/master
no_ansiesc env
2023-06-23 12:58:11 +02:00
Sebastien eb4f30651e
no_ansiesc env
The no_ansiesc environment variable disables ANSI escape codes in logs
2023-06-23 10:59:36 +02:00
Piero Toffanin cefcfde07d
Merge pull request #1667 from vinsonliux/rtk-srt-parser
Added DJI Phantom4 rtk srt parsing
2023-06-18 16:59:45 +02:00
Piero Toffanin b620e4e6cc Parse RTK prefix 2023-06-18 11:27:28 +02:00
Liuxuyang 8a4a309ceb Added DJI Phantom4 rtk srt parsing 2023-06-18 08:50:34 +08:00
Piero Toffanin cfa689b5da
Merge pull request #1664 from pierotofy/flags
Keep only best SIFT features and other fixes
2023-06-12 23:51:58 +02:00
Piero Toffanin 0b8c75ca10 Fix message 2023-06-12 21:24:17 +02:00
Piero Toffanin 3a4b98a7eb Keep only best SIFT features and other fixes 2023-06-12 21:12:13 +02:00
Piero Toffanin c2ab760dd9
Merge pull request #1662 from pierotofy/exiftoolfix
Fix Exiftool Installation
2023-06-01 22:40:54 -04:00
Piero Toffanin dee9feed17 Bump version 2023-06-01 22:39:29 -04:00
Piero Toffanin 542dd6d053 Fix Exiftool 2023-06-01 22:38:23 -04:00
Piero Toffanin 5deab15e5f no need for swap space setup on self hosted runner 2023-05-31 14:47:15 -04:00
Piero Toffanin 6d37355d6b Yet tighter max tiles check 2023-05-30 19:50:31 -04:00
Piero Toffanin ba1cc39adb Tighter max_tiles check 2023-05-27 01:48:46 -04:00
Piero Toffanin 54b0ac9bb0
Merge pull request #1661 from pierotofy/imgfixes
Fix uint16 3-channel image inputs
2023-05-26 18:27:09 -04:00
Piero Toffanin 12b8f43912 Update OpenMVS 2023-05-26 14:03:09 -04:00
Piero Toffanin ad091fd9af Fix uint16 3-channel orthophoto generation 2023-05-26 13:29:17 -04:00
Piero Toffanin a2e63508c2 Check for den attribute while extracting float values from EXIFs 2023-05-26 01:26:25 -04:00
Piero Toffanin bebea18697
Merge pull request #1660 from pierotofy/checks-meta
max_tile safety check in DEM creation
2023-05-24 16:17:51 -04:00
Piero Toffanin 58c9fd2231
Merge pull request #1659 from hobu/odm_georeferencing-pdal-calls
PDAL translate call updates
2023-05-24 13:13:14 -04:00
Piero Toffanin 567cc3c872 File check fix 2023-05-24 13:09:35 -04:00
Piero Toffanin 59019dac66 Update untwine, embed GeoJSON GCPs in point cloud 2023-05-24 13:01:20 -04:00
Piero Toffanin ef1ea9a067 Merge 2023-05-24 12:29:00 -04:00
Piero Toffanin 9014912c98 record_id must be a number 2023-05-24 12:28:23 -04:00
Piero Toffanin ad100525b5 Embed GeoJSON ground control points file in point cloud outputs 2023-05-24 12:27:58 -04:00
Piero Toffanin 6ebb8b50d7 max_tile safety check in commands.create_dem 2023-05-24 11:42:18 -04:00
Piero Toffanin 8c300ab4de laszip --> lazperf 2023-05-24 10:54:48 -04:00
Piero Toffanin 609abfd115 Minor fixes, OpenDroneMap --> ODM 2023-05-24 10:42:31 -04:00
Howard Butler 607ce5ffa6
PDAL translate call updates
* add ``record_id`` to the writers.las.vlr call and change
  ``user_id`` to ``OpenDroneMap``
* use f-strings with defined precision of 0.01 for offsets (see the
  sketch after this entry)
* remove use of the ``laszip`` ``writers.las.compression`` setting, which
  is no longer used
2023-05-24 09:07:26 -05:00
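To illustrate the f-string bullet (offset values are placeholders): a defined precision of 0.01 means formatting with two decimals before handing the offsets to the PDAL pipeline:

```python
offset = (435241.127953, 5164267.904831, 102.3391)  # placeholder values
offsets_arg = f"{offset[0]:.2f},{offset[1]:.2f},{offset[2]:.2f}"
print(offsets_arg)  # 435241.13,5164267.90,102.34
```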
Piero Toffanin ce6c745715
Merge pull request #1657 from pierotofy/copcfix
Fix COPC export
2023-05-22 21:11:34 -04:00
Piero Toffanin 4dd4da20c3 Bump version 2023-05-22 18:22:01 -04:00
Piero Toffanin adc0570c53 Fix COPC export 2023-05-22 18:21:52 -04:00
Piero Toffanin 552b45bce4
Merge pull request #1656 from pierotofy/order
Minor PDF report fixes
2023-05-22 17:34:27 -04:00
Piero Toffanin 27abb8bb10 Minor report fixes 2023-05-22 12:37:33 -04:00
Piero Toffanin 05e8323174
Merge pull request #1655 from pierotofy/order
Lower blur threshold
2023-05-22 11:03:13 -04:00
Piero Toffanin f172e91b7e Lower blur threshold 2023-05-22 11:01:19 -04:00
Piero Toffanin ed07b18bad
Merge pull request #1654 from pierotofy/order
Add --matcher-order
2023-05-19 19:55:31 -04:00
Piero Toffanin 3535c64347 Update description 2023-05-19 15:21:22 -04:00
Piero Toffanin b076b667a4 Merge branch 'rexliuser/master' into order 2023-05-19 15:17:35 -04:00
Piero Toffanin 8e735e01d3 Rename option, misc refactor 2023-05-19 15:14:44 -04:00
Piero Toffanin 396dde0d2c
Merge pull request #1653 from pierotofy/flir
Adds support for Zenmuse XT Thermal images
2023-05-19 14:14:04 -04:00
Piero Toffanin 4c7c37bbd4 Bump version 2023-05-19 12:26:53 -04:00
Piero Toffanin 182bcfa68f Properly label single thermal band 2023-05-19 11:21:56 -04:00
Piero Toffanin 5db0d0111d Add External-ExifTool.cmake 2023-05-18 15:26:53 -04:00
Piero Toffanin 80e4b4d649 Zenmuse XT support 2023-05-18 15:26:34 -04:00
rexliuser 4a26aa1c9c added argument: matcher-order-neighbors 2023-05-17 15:18:24 +08:00
Piero Toffanin a922aaecbc
Merge pull request #1649 from smathermather/proj4-guidance
Update proj4 string guidance
2023-05-15 00:05:43 -04:00
Stephen Mather 7be148a90a
spelling is hard 2023-05-14 20:20:02 -04:00
Stephen Mather 3f1975b353
Update proj4 string guidance 2023-05-14 20:18:30 -04:00
Piero Toffanin b8965b50db
Merge pull request #1648 from pierotofy/orthogdal
Direct odm_orthophoto georeferencing
2023-05-08 14:20:41 -04:00
Piero Toffanin ffad2b02e8 Bump version 2023-05-08 10:45:06 -04:00
Piero Toffanin 1ae7974019 Direct odm_orthophoto georeferencing 2023-05-08 10:43:56 -04:00
Piero Toffanin c0d5e21d38
Merge pull request #1645 from aeburriel/patch-1
DJI Mavic2 Enterprise Advanced & Zenmuse Z30 RSC
2023-05-04 15:50:12 -04:00
Piero Toffanin f82b6a1f82
Merge pull request #1646 from aeburriel/patch-2
Update DJI Mavic Mini v1 RSC
2023-05-04 15:49:47 -04:00
Antonio Eugenio Burriel ca7abe165a
Update DJI Mavic Mini v1 RSC
This drone has different rolling shutter timings for 4:3 and 16:9 ratios.
2023-05-04 18:58:32 +02:00
Antonio Eugenio Burriel 0f595cab80
DJI Mavic2 Enterprise Advanced & Zenmuse Z30 RSC 2023-05-04 17:48:38 +02:00
Piero Toffanin d340d8601d
Merge pull request #1643 from smathermather/dji-band
Via Australian Plant Phenomics Facility
2023-05-02 17:33:54 -04:00
Stephen Mather 14048cc049
Via Australian Plant Phenomics Facility
via https://gitlab.com/-/snippets/2493855

discussion here:
https://community.opendronemap.org/t/code-snippet-for-naming-dji-phantom-4-multispectral-images-to-work-better-with-odm/14678
2023-05-02 13:33:41 -04:00
Piero Toffanin f7c87172e9
Merge pull request #1641 from pierotofy/partialm
Add --sfm-no-partial
2023-05-01 17:04:12 -04:00
Piero Toffanin c34f227157
Merge pull request #1640 from pierotofy/inpaint
Edge Inpainting
2023-05-01 17:01:46 -04:00
Piero Toffanin 7aade078ad Add --sfm-no-partial 2023-05-01 16:56:49 -04:00
Piero Toffanin ac89d2212e Georef dataset check 2023-04-29 11:10:16 -04:00
Piero Toffanin cdf876a46b
Merge pull request #1639 from pierotofy/dem2meshup
More resilient edge collapses
2023-04-28 13:59:42 -04:00
Piero Toffanin 8c0e1b3173 RGB edge inpainting 2023-04-28 12:57:27 -04:00
Piero Toffanin f27b611c43 More resilient edge collapse 2023-04-28 10:32:43 -04:00
Piero Toffanin e736670094
Merge pull request #1638 from pierotofy/npctile
Automatic pc-tile (remove --pc-tile)
2023-04-24 05:18:29 -04:00
Piero Toffanin f8cd626ae8 Bump version 2023-04-23 17:07:36 -04:00
Piero Toffanin 21e9df61f7 Automatic pc-tile 2023-04-23 16:29:08 -04:00
Piero Toffanin 2c8780c4d1
Merge pull request #1636 from pierotofy/amfixes
Align textured models for all bands in multispectral processing
2023-04-21 15:21:51 -04:00
Yunpeng Li 1ea2a990e5 Align textured models for all bands in multispectral processing 2023-04-21 12:54:33 -04:00
Piero Toffanin 706221c626
Merge pull request #1634 from pierotofy/lshidx
FLANN LSH Indexing for binary descriptors
2023-04-19 16:27:21 -04:00
Piero Toffanin 02570ed632 FLANN LSH Indexing for binary descriptors 2023-04-19 13:17:20 -04:00
Piero Toffanin 7048dd86fd
Merge pull request #1632 from pierotofy/sequoia
Add  camera-lens fisheye_opencv, better support for Parrot Sequoia
2023-04-17 16:09:39 -04:00
Piero Toffanin bd0f33f978 Add camera-lens fisheye_opencv, better support for Parrot Sequoia 2023-04-17 12:04:40 -04:00
59 changed files with 1272 additions and 965 deletions

View file

@@ -0,0 +1,33 @@
name: Issue Triage
on:
issues:
types:
- opened
jobs:
issue_triage:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- uses: pierotofy/issuewhiz@v1
with:
ghToken: ${{ secrets.GITHUB_TOKEN }}
openAI: ${{ secrets.OPENAI_TOKEN }}
filter: |
- "#"
variables: |
- Q: "A question about using a software or seeking guidance on doing something?"
- B: "Reporting an issue or a software bug?"
- P: "Describes an issue with processing a set of images or a particular dataset?"
- D: "Contains a link to a dataset or images?"
- E: "Contains a suggestion for an improvement or a feature request?"
- SC: "Describes an issue related to compiling or building source code?"
logic: |
- 'Q and (not B) and (not P) and (not E) and (not SC) and not (title_lowercase ~= ".*bug: .+")': [comment: "Could we move this conversation over to the forum at https://community.opendronemap.org? The forum is the right place to ask questions (we try to keep the GitHub issue tracker for feature requests and bugs only). Thank you!", close: true, stop: true]
- "B and (not P) and (not E) and (not SC)": [label: "software fault", stop: true]
- "P and D": [label: "possible software fault", stop: true]
- "P and (not D) and (not SC) and (not E)": [comment: "Thanks for the report, but it looks like you didn't include a copy of your dataset for us to reproduce this issue? Please make sure to follow our [issue guidelines](https://github.com/OpenDroneMap/ODM/blob/master/docs/issue_template.md) :pray: ", close: true, stop: true]
- "E": [label: enhancement, stop: true]
- "SC": [label: "possible software fault"]
signature: "p.s. I'm just an automated script, not a human being."

View file

@@ -1,98 +0,0 @@
name: Publish Docker and WSL Images
on:
push:
branches:
- master
tags:
- v*
jobs:
build:
runs-on: self-hosted
timeout-minutes: 2880
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 12
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
config-inline: |
[worker.oci]
max-parallelism = 1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Use the repository information of the checked-out code to format docker tags
- name: Docker meta
id: docker_meta
uses: crazy-max/ghaction-docker-meta@v1
with:
images: opendronemap/odm
tag-semver: |
{{version}}
- name: Build and push Docker image
id: docker_build
uses: docker/build-push-action@v2
with:
file: ./portable.Dockerfile
platforms: linux/amd64,linux/arm64
push: true
no-cache: true
tags: |
${{ steps.docker_meta.outputs.tags }}
opendronemap/odm:latest
- name: Export WSL image
id: wsl_export
run: |
docker pull opendronemap/odm
docker export $(docker create opendronemap/odm) --output odm-wsl-rootfs-amd64.tar.gz
gzip odm-wsl-rootfs-amd64.tar.gz
echo ::set-output name=amd64-rootfs::"odm-wsl-rootfs-amd64.tar.gz"
# Convert tag into a GitHub Release if we're building a tag
- name: Create Release
if: github.event_name == 'tag'
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
draft: false
prerelease: false
# Upload the WSL image to the new Release if we're building a tag
- name: Upload amd64 Release Asset
if: github.event_name == 'tag'
id: upload-amd64-wsl-rootfs
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing its ID to get its outputs object, which includes an `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: ./${{ steps.wsl_export.outputs.amd64-rootfs }}
asset_name: ${{ steps.wsl_export.outputs.amd64-rootfs }}
asset_content_type: application/gzip
# Always archive the WSL rootfs
- name: Upload amd64 Artifact
uses: actions/upload-artifact@v2
with:
name: wsl-rootfs
path: ${{ steps.wsl_export.outputs.amd64-rootfs }}
- name: Docker image digest and WSL rootfs download URL
run: |
echo "Docker image digest: ${{ steps.docker_build.outputs.digest }}"
echo "WSL AMD64 rootfs URL: ${{ steps.upload-amd64-wsl-rootfs.browser_download_url }}"
# Trigger NodeODM build
- name: Dispatch NodeODM Build Event
id: nodeodm_dispatch
run: |
curl -X POST -u "${{secrets.PAT_USERNAME}}:${{secrets.PAT_TOKEN}}" -H "Accept: application/vnd.github.everest-preview+json" -H "Content-Type: application/json" https://api.github.com/repos/OpenDroneMap/NodeODM/actions/workflows/publish-docker.yaml/dispatches --data '{"ref": "master"}'

View file

@@ -9,14 +9,11 @@ on:
jobs:
build:
runs-on: ubuntu-latest
runs-on: self-hosted
timeout-minutes: 2880
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 12
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx

View file

@@ -0,0 +1,53 @@
name: Publish Docker and WSL Images
on:
push:
branches:
- master
tags:
- v*
jobs:
build:
runs-on: self-hosted
timeout-minutes: 2880
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
config-inline: |
[worker.oci]
max-parallelism = 1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Use the repository information of the checked-out code to format docker tags
- name: Docker meta
id: docker_meta
uses: crazy-max/ghaction-docker-meta@v1
with:
images: opendronemap/odm
tag-semver: |
{{version}}
- name: Build and push Docker image
id: docker_build
uses: docker/build-push-action@v2
with:
file: ./portable.Dockerfile
platforms: linux/amd64,linux/arm64
push: true
no-cache: true
tags: |
${{ steps.docker_meta.outputs.tags }}
opendronemap/odm:latest
# Trigger NodeODM build
- name: Dispatch NodeODM Build Event
id: nodeodm_dispatch
run: |
curl -X POST -u "${{secrets.PAT_USERNAME}}:${{secrets.PAT_TOKEN}}" -H "Accept: application/vnd.github.everest-preview+json" -H "Content-Type: application/json" https://api.github.com/repos/OpenDroneMap/NodeODM/actions/workflows/publish-docker.yaml/dispatches --data '{"ref": "master"}'

View file

@@ -1,51 +0,0 @@
name: Publish Snap
on:
push:
branches:
- master
tags:
- v**
jobs:
build-and-release:
runs-on: ubuntu-latest
strategy:
matrix:
architecture:
- amd64
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 12
- name: Build
id: build
uses: diddlesnaps/snapcraft-multiarch-action@v1
with:
architecture: ${{ matrix.architecture }}
- name: Publish unstable builds to Edge
if: github.ref == 'refs/heads/master'
uses: snapcore/action-publish@v1
with:
store_login: ${{ secrets.STORE_LOGIN }}
snap: ${{ steps.build.outputs.snap }}
release: edge
- name: Publish tagged prerelease builds to Beta
# These are identified by having a hyphen in the tag name, e.g.: v1.0.0-beta1
if: startsWith(github.ref, 'refs/tags/v') && contains(github.ref, '-')
uses: snapcore/action-publish@v1
with:
store_login: ${{ secrets.STORE_LOGIN }}
snap: ${{ steps.build.outputs.snap }}
release: beta
- name: Publish tagged stable or release-candidate builds to Candidate
# These are identified by NOT having a hyphen in the tag name, OR having "-RC" or "-rc" in the tag name.
if: startsWith(github.ref, 'refs/tags/v1') && ( ( ! contains(github.ref, '-') ) || contains(github.ref, '-RC') || contains(github.ref, '-rc') )
uses: snapcore/action-publish@v1
with:
store_login: ${{ secrets.STORE_LOGIN }}
snap: ${{ steps.build.outputs.snap }}
release: candidate

View file

@@ -83,30 +83,6 @@ ODM can be installed natively on Windows. Just download the latest setup from th
run C:\Users\youruser\datasets\project [--additional --parameters --here]
```
## Snap Package
ODM is now available as a Snap Package from the Snap Store. To install you may use the Snap Store (available itself as a Snap Package) or the command line:
```bash
sudo snap install --edge opendronemap
```
To run, you will need a terminal window into which you can type:
```bash
opendronemap
# or
snap run opendronemap
# or
/snap/bin/opendronemap
```
Snap packages will be kept up-to-date automatically, so you don't need to update ODM manually.
## GPU Acceleration
ODM has support for doing SIFT feature extraction on a GPU, which is about 2x faster than the CPU on a typical consumer laptop. To use this feature, you need to use the `opendronemap/odm:gpu` docker image instead of `opendronemap/odm` and you need to pass the `--gpus all` flag:
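For example (dataset path is a placeholder):
```bash
docker run -ti --rm -v /my/datasets:/datasets --gpus all opendronemap/odm:gpu --project-path /datasets project
```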
@@ -147,52 +123,6 @@ You're in good shape!
See https://github.com/NVIDIA/nvidia-docker and https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker for information on docker/NVIDIA setup.
## WSL or WSL2 Install
Note: This requires that you have installed WSL already by following [the instructions on Microsoft's Website](https://docs.microsoft.com/en-us/windows/wsl/install-win10).
You can run ODM via WSL or WSL2 by downloading the `rootfs.tar.gz` file from [the releases page on GitHub](https://github.com/OpenDroneMap/ODM/releases). Once you have the file saved to your `Downloads` folder in Windows, open a PowerShell or CMD window by right-clicking the Flag Menu (bottom left by default) and selecting "Windows PowerShell", or alternatively by using the [Windows Terminal from the Windows Store](https://www.microsoft.com/store/productId/9N0DX20HK701).
Inside a PowerShell window, or Windows Terminal running PowerShell, type the following:
```powershell
# PowerShell
wsl.exe --import ODM $env:APPDATA\ODM C:\path\to\your\Downloads\rootfs.tar.gz
```
Alternatively if you're using `CMD.exe` or the `CMD` support in Windows Terminal type:
```cmd
# CMD
wsl.exe --import ODM %APPDATA%\ODM C:\path\to\your\Downloads\rootfs.tar.gz
```
In either case, make sure you replace `C:\path\to\your\Downloads\rootfs.tar.gz` with the actual path to your `rootfs.tar.gz` file.
This will save a new Hard Disk image to your Windows `AppData` folder at `C:\Users\username\AppData\roaming\ODM` (where `username` is your Username in Windows), and will set-up a new WSL "distro" called `ODM`.
You may start the ODM distro by using the relevant option in the Windows Terminal (from the Windows Store) or by executing `wsl.exe -d ODM` in a PowerShell or CMD window.
ODM is installed to the distro's `/code` directory. You may execute it with:
```bash
/code/run.sh
```
### Updating ODM in WSL
The easiest way to update the installation of ODM is to download the new `rootfs.tar.gz` file and import it as another distro. You may then unregister the original instance the same way you delete ODM from WSL (see next heading).
### Deleting an ODM in WSL instance
```cmd
wsl.exe --unregister ODM
```
Finally you'll want to delete the files by using your Windows File Manager (Explorer) to navigate to `%APPDATA%`, find the `ODM` directory, and delete it by dragging it to the recycle bin. To permanently delete it empty the recycle bin.
If you have installed to a different directory by changing the `--import` command you ran to install you must use that directory name to delete the correct files. This is likely the case if you have multiple ODM installations or are updating an already-installed installation.
## Native Install (Ubuntu 21.04)
You can run ODM natively on Ubuntu 21.04 (although we don't recommend it):
@@ -267,6 +197,8 @@ Starting from version 3.0.4, ODM can automatically extract images from video fil
Help improve our software! We welcome contributions from everyone, whether to add new features, improve speed, fix existing bugs or add support for more cameras. Check our [code of conduct](https://github.com/OpenDroneMap/documents/blob/master/CONDUCT.md), the [contributing guidelines](https://github.com/OpenDroneMap/documents/blob/master/CONTRIBUTING.md) and [how decisions are made](https://github.com/OpenDroneMap/documents/blob/master/GOVERNANCE.md#how-decisions-are-made).
### Installation and first run
For Linux users, the easiest way to modify the software is to make sure docker is installed, clone the repository and then run from a shell:
```bash
@@ -285,6 +217,18 @@ You can now make changes to the ODM source. When you are ready to test the chang
```bash
(odmdev) [user:/code] master+* ± ./run.sh --project-path /datasets mydataset
```
### Stop dev container
```bash
docker stop odmdev
```
### To come back to the dev environment
Change your_username to your username:
```bash
docker start odmdev
docker exec -ti odmdev bash
su your_username
```
If you have questions, join the developer's chat at https://community.opendronemap.org/c/developers-chat/21

View file

@@ -142,7 +142,7 @@ SETUP_EXTERNAL_PROJECT(OpenCV ${ODM_OpenCV_Version} ${ODM_BUILD_OpenCV})
# ---------------------------------------------------------------------------------------------
# Google Flags library (GFlags)
#
set(ODM_GFlags_Version 2.1.2)
set(ODM_GFlags_Version 2.2.2)
option(ODM_BUILD_GFlags "Force to build GFlags library" OFF)
SETUP_EXTERNAL_PROJECT(GFlags ${ODM_GFlags_Version} ${ODM_BUILD_GFlags})
@@ -178,6 +178,8 @@ set(custom_libs OpenSfM
PyPopsift
Obj2Tiles
OpenPointClass
ExifTool
RenderDEM
)
externalproject_add(mve
@@ -221,7 +223,7 @@ externalproject_add(poissonrecon
externalproject_add(dem2mesh
GIT_REPOSITORY https://github.com/OpenDroneMap/dem2mesh.git
GIT_TAG 300
GIT_TAG 334
PREFIX ${SB_BINARY_DIR}/dem2mesh
SOURCE_DIR ${SB_SOURCE_DIR}/dem2mesh
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
@@ -242,13 +244,22 @@ externalproject_add(dem2points
externalproject_add(odm_orthophoto
DEPENDS opencv
GIT_REPOSITORY https://github.com/OpenDroneMap/odm_orthophoto.git
GIT_TAG 290
GIT_TAG 317
PREFIX ${SB_BINARY_DIR}/odm_orthophoto
SOURCE_DIR ${SB_SOURCE_DIR}/odm_orthophoto
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
${WIN32_CMAKE_ARGS} ${WIN32_GDAL_ARGS}
)
externalproject_add(fastrasterfilter
GIT_REPOSITORY https://github.com/OpenDroneMap/FastRasterFilter.git
GIT_TAG main
PREFIX ${SB_BINARY_DIR}/fastrasterfilter
SOURCE_DIR ${SB_SOURCE_DIR}/fastrasterfilter
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
${WIN32_CMAKE_ARGS} ${WIN32_GDAL_ARGS}
)
externalproject_add(lastools
GIT_REPOSITORY https://github.com/OpenDroneMap/LAStools.git
GIT_TAG 250

View file

@@ -0,0 +1,38 @@
set(_proj_name exiftool)
set(_SB_BINARY_DIR "${SB_BINARY_DIR}/${_proj_name}")
if (WIN32)
ExternalProject_Add(${_proj_name}
PREFIX ${_SB_BINARY_DIR}
TMP_DIR ${_SB_BINARY_DIR}/tmp
STAMP_DIR ${_SB_BINARY_DIR}/stamp
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
URL https://github.com/OpenDroneMap/windows-deps/releases/download/2.5.0/exiftool.zip
SOURCE_DIR ${SB_SOURCE_DIR}/${_proj_name}
UPDATE_COMMAND ""
CONFIGURE_COMMAND ""
BUILD_IN_SOURCE 1
BUILD_COMMAND ""
INSTALL_COMMAND ${CMAKE_COMMAND} -E copy ${SB_SOURCE_DIR}/${_proj_name}/exiftool.exe ${SB_INSTALL_DIR}/bin
#--Output logging-------------
LOG_DOWNLOAD OFF
LOG_CONFIGURE OFF
LOG_BUILD OFF
)
else()
externalproject_add(${_proj_name}
PREFIX ${_SB_BINARY_DIR}
TMP_DIR ${_SB_BINARY_DIR}/tmp
STAMP_DIR ${_SB_BINARY_DIR}/stamp
SOURCE_DIR ${SB_SOURCE_DIR}/${_proj_name}
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
URL https://github.com/exiftool/exiftool/archive/refs/tags/12.62.zip
UPDATE_COMMAND ""
CONFIGURE_COMMAND ""
BUILD_IN_SOURCE 1
BUILD_COMMAND perl Makefile.PL PREFIX=${SB_INSTALL_DIR} LIB=${SB_INSTALL_DIR}/bin/lib
INSTALL_COMMAND make install && rm -fr ${SB_INSTALL_DIR}/man
)
endif()

View file

@@ -8,7 +8,7 @@ ExternalProject_Add(${_proj_name})
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/FPCFilter
GIT_TAG 305
GIT_TAG 331
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------

View file

@@ -14,7 +14,7 @@ externalproject_add(vcg
externalproject_add(eigen34
GIT_REPOSITORY https://gitlab.com/libeigen/eigen.git
GIT_TAG 3.4
GIT_TAG 7176ae16238ded7fb5ed30a7f5215825b3abd134
UPDATE_COMMAND ""
SOURCE_DIR ${SB_SOURCE_DIR}/eigen34
CONFIGURE_COMMAND ""
@@ -53,7 +53,7 @@ ExternalProject_Add(${_proj_name})
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/openMVS
GIT_TAG 301
GIT_TAG 320
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------

View file

@@ -25,7 +25,7 @@ ExternalProject_Add(${_proj_name})
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/OpenSfM/
GIT_TAG 305
GIT_TAG 330
#--Update/Patch step----------
UPDATE_COMMAND git submodule update --init --recursive
#--Configure step-------------

View file

@@ -16,7 +16,7 @@ ExternalProject_Add(${_proj_name})
STAMP_DIR ${_SB_BINARY_DIR}/stamp
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
URL https://github.com/PDAL/PDAL/archive/refs/tags/2.4.3.zip
URL https://github.com/OpenDroneMap/PDAL/archive/refs/heads/333.zip
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------

View file

@@ -0,0 +1,30 @@
set(_proj_name renderdem)
set(_SB_BINARY_DIR "${SB_BINARY_DIR}/${_proj_name}")
ExternalProject_Add(${_proj_name}
DEPENDS pdal
PREFIX ${_SB_BINARY_DIR}
TMP_DIR ${_SB_BINARY_DIR}/tmp
STAMP_DIR ${_SB_BINARY_DIR}/stamp
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/RenderDEM
GIT_TAG main
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------
SOURCE_DIR ${SB_SOURCE_DIR}/${_proj_name}
CMAKE_ARGS
-DPDAL_DIR=${SB_INSTALL_DIR}/lib/cmake/PDAL
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
${WIN32_CMAKE_ARGS}
#--Build step-----------------
BINARY_DIR ${_SB_BINARY_DIR}
#--Install step---------------
INSTALL_DIR ${SB_INSTALL_DIR}
#--Output logging-------------
LOG_DOWNLOAD OFF
LOG_CONFIGURE OFF
LOG_BUILD OFF
)

View file

@@ -9,7 +9,7 @@ ExternalProject_Add(${_proj_name})
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/untwine/
GIT_TAG 285
GIT_TAG 317
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------

View file

@@ -1 +1 @@
3.1.1
3.5.1

View file

@@ -127,6 +127,9 @@ installreqs() {
installdepsfromsnapcraft build openmvs
set -e
# edt requires numpy to build
pip install --ignore-installed numpy==1.23.1
pip install --ignore-installed -r requirements.txt
#if [ ! -z "$GPU_INSTALL" ]; then
#fi

View file

@@ -0,0 +1,26 @@
# exif_binner.py
Bins multispectral drone images by spectral band, using EXIF data. Also verifies that each bin is complete (i.e. contains all expected bands) and can log errors to a CSV file. Excludes RGB images by default.
## Requirements
- [Pillow](https://pillow.readthedocs.io/en/stable/installation.html) library for reading images and EXIF data.
- [tqdm](https://github.com/tqdm/tqdm#installation) for progress bars - can be removed
## Usage
```
exif_binner.py <args> <path to folder of images to rename> <output folder>
```
Optional arguments:
- `-b`/`--bands <integer>`: Number of expected bands per capture. Default: `5`
- `-s`/`--sequential <True/False>`: Use sequential capture group in filenames rather than original capture ID. Default: `True`
- `-z`/`--zero_pad <integer>`: If using sequential capture groups, zero-pad the group number to this many digits. 0 for no padding, -1 for auto padding. Default: `5`
- `-w`/`--whitespace_replace <string>`: Replace whitespace characters with this character. Default: `-`
- `-l`/`--logfile <filename>`: Write processed image metadata to this CSV file
- `-r`/`--replace_filename <string>`: Use this instead of using the original filename in new filenames.
- `-f`/`--force`: Do not ask for processing confirmation.
- `-g`/`--no_grouping`: Do not apply grouping, only validate and add band name.
- Show these on the command line with `-h`/`--help`.
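For example, to bin a five-band set and keep a processing log (paths are placeholders):
```
exif_binner.py -b 5 -l process_log.csv ./multispec_images ./binned
```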

View file

@@ -0,0 +1,210 @@
#!/usr/bin/env python3
# Originally developed by Ming Chia at the Australian Plant Phenomics Facility (Australian National University node)
# Usage:
# exif_binner.py <args> <path to folder of images to rename> <output folder>
# standard libraries
import sys
import os
import shutil
import re
import csv
import math
import argparse
# other imports
import PIL
from PIL import Image, ExifTags
from tqdm import tqdm # optional: see "swap with this for no tqdm" below
parser = argparse.ArgumentParser()
# required args
parser.add_argument("file_dir", help="input folder of images")
parser.add_argument("output_dir", help="output folder to copy images to")
# args with defaults
parser.add_argument("-b", "--bands", help="number of expected bands per capture", type=int, default=5)
parser.add_argument("-s", "--sequential", help="use sequential capture group in filenames rather than original capture ID", type=bool, default=True)
parser.add_argument("-z", "--zero_pad", help="if using sequential capture groups, zero-pad the group number to this many digits. 0 for no padding, -1 for auto padding", type=int, default=5)
parser.add_argument("-w", "--whitespace_replace", help="replace whitespace characters with this character", type=str, default="-")
# optional args no defaults
parser.add_argument("-l", "--logfile", help="write image metadata used to this CSV file", type=str)
parser.add_argument("-r", "--replace_filename", help="use this instead of using the original filename in new filenames", type=str)
parser.add_argument("-f", "--force", help="don't ask for confirmation", action="store_true")
parser.add_argument("-g", "--no_grouping", help="do not apply grouping, only validate and add band name", action="store_true")
args = parser.parse_args()
file_dir = args.file_dir
output_dir = args.output_dir
replacement_character = args.whitespace_replace
expected_bands = args.bands
logfile = args.logfile
output_valid = os.path.join(output_dir, "valid")
output_invalid = os.path.join(output_dir, "invalid")
file_count = len(os.listdir(file_dir))
auto_zero_pad = len(str(math.ceil(float(file_count) / float(expected_bands))))
if args.zero_pad >= 1:
if int("9" * args.zero_pad) < math.ceil(float(file_count) / float(expected_bands)):
raise ValueError("Zero pad must have more digits than maximum capture groups! Attempted to pad " + str(args.zero_pad) + " digits with "
+ str(file_count) + " files and " + str(expected_bands) + " bands (up to " + str(math.ceil(float(file_count) / float(expected_bands)))
+ " capture groups possible, try at least " + str(auto_zero_pad) + " digits to zero pad)")
if args.force is False:
print("Input dir: " + str(file_dir) + " (" + str(file_count) + " files)")
print("Output folder: " + str(output_dir))
if args.replace_filename:
print("Replacing all basic filenames with: " + args.replace_filename)
else:
print("Replace whitespace in filenames with: " + replacement_character)
print("Number of expected bands: " + str(expected_bands))
if logfile:
print("Save image processing metadata to: " + logfile)
confirmation = input("Confirm processing [Y/N]: ")
if confirmation.lower() in ["y"]:
pass
else:
sys.exit()
no_exif_n = 0
images = []
print("Indexing images ...")
# for filename in os.listdir(file_dir): # swap with this for no tqdm
for filename in tqdm(os.listdir(file_dir)):
old_path = os.path.join(file_dir, filename)
file_name, file_ext = os.path.splitext(filename)
image_entry = {"name": filename, "valid": True, "band": "-", "ID": "-", "group": 0, "DateTime": "-", "error": "-"} # dashes to ensure CSV exports properly, can be blank
try:
img = Image.open(old_path)
except PIL.UnidentifiedImageError as img_err:
# if it tries importing a file it can't read as an image
# uncomment to print errors
# sys.stderr.write(str(img_err) + "\n")
no_exif_n += 1
if logfile:
image_entry["valid"] = False
image_entry["error"] = "Not readable as image: " + str(img_err)
images.append(image_entry)
continue
for key, val in img.getexif().items():
if key in ExifTags.TAGS:
# print(ExifTags.TAGS[key] + ":" + str(val)) # debugging
if ExifTags.TAGS[key] == "XMLPacket":
# find bandname
bandname_start = val.find(b'<Camera:BandName>')
bandname_end = val.find(b'</Camera:BandName>')
bandname_coded = val[(bandname_start + 17):bandname_end]
bandname = bandname_coded.decode("UTF-8")
image_entry["band"] = str(bandname)
# find capture ID
image_entry["ID"] = re.findall('CaptureUUID="([^"]*)"', str(val))[0]
if ExifTags.TAGS[key] == "DateTime":
image_entry["DateTime"] = str(val)
image_entry["band"].replace(" ", "-")
if len(image_entry["band"]) >= 99: # if it's too long, wrong value (RGB pic has none)
# no exif present
no_exif_n += 1
image_entry["valid"] = False
image_entry["error"] = "Image band name appears to be too long"
elif image_entry["ID"] == "" and expected_bands > 1:
no_exif_n += 1
image_entry["valid"] = False
image_entry["error"] = "No Capture ID found"
if (file_ext.lower() in [".jpg", ".jpeg"]) and (image_entry["band"] == "-"): # hack for DJI RGB jpgs
# handle = open(old_path, 'rb').read()
# xmp_start = handle.find(b'<x:xmpmeta')
# xmp_end = handle.find(b'</x:xmpmeta')
# xmp_bit = handle[xmp_start:xmp_end + 12]
# image_entry["ID"] = re.findall('CaptureUUID="([^"]*)"', str(xmp_bit))[0]
# image_entry["band"] = "RGB" # TODO: we assume this. may not hold true for all datasets
no_exif_n += 1 # this is just to keep a separate invalid message, comment out this whole if block and the jpgs should be handled by the "no capture ID" case
image_entry["valid"] = False
image_entry["error"] = "RGB jpg, not counting for multispec processing"
images.append(image_entry)
# print(new_path) # debugging
print(str(no_exif_n) + " files were not multispectral images")
no_matching_bands_n = 0
new_capture_id = 1
capture_ids = {}
images = sorted(images, key=lambda img: (img["DateTime"], img["name"]))
# now sort and identify valid entries
if not args.no_grouping:
# for this_img in images: # swap with this for no tqdm
for this_img in tqdm(images):
if not this_img["valid"]: # prefiltered in last loop
continue
same_id_images = [image for image in images if image["ID"] == this_img["ID"]]
if len(same_id_images) != expected_bands: # defaults to True, so only need to filter out not in
no_matching_bands_n += 1
this_img["valid"] = False
this_img["error"] = "Capture ID has too few/too many bands"
else:
if this_img["ID"] in capture_ids.keys():
this_img["group"] = capture_ids[this_img["ID"]]
else:
capture_ids[this_img["ID"]] = new_capture_id
this_img["group"] = capture_ids[this_img["ID"]] # a little less efficient but we know it works this way
new_capture_id += 1
print(str(no_matching_bands_n) + " images had unexpected bands in same capture")
os.makedirs(output_valid, exist_ok=True)
os.makedirs(output_invalid, exist_ok=True)
identifier = ""
# then do the actual copy
# for this_img in images: # swap with this for no tqdm
for this_img in tqdm(images):
old_path = os.path.join(file_dir, this_img["name"])
file_name, file_ext = os.path.splitext(this_img["name"])
if args.whitespace_replace:
file_name = replacement_character.join(file_name.split())
if args.replace_filename and not args.no_grouping:
file_name = args.replace_filename
if this_img["valid"]:
prefix = output_valid
if args.no_grouping:
file_name_full = file_name + "-" + this_img["band"] + file_ext
else:
# set ID based on args
if args.sequential:
if args.zero_pad == 0:
identifier = str(this_img["group"])
elif args.zero_pad == -1:
identifier = str(this_img["group"]).zfill(auto_zero_pad)
else:
identifier = str(this_img["group"]).zfill(args.zero_pad)
else:
identifier = this_img["ID"]
file_name_full = identifier + "-" + file_name + "-" + this_img["band"] + file_ext
else:
prefix = output_invalid
file_name_full = file_name + file_ext
new_path = os.path.join(prefix, file_name_full)
shutil.copy(old_path, new_path)
if logfile:
header = images[0].keys()
with open(logfile, 'w', newline='') as logfile_handle:
dict_writer = csv.DictWriter(logfile_handle, header)
dict_writer.writeheader()
dict_writer.writerows(images)
print("Done!")

View file

@@ -51,6 +51,5 @@ commands.create_dem(args.point_cloud,
outdir=outdir,
resolution=args.resolution,
decimation=1,
max_workers=multiprocessing.cpu_count(),
keep_unfilled_copy=False
max_workers=multiprocessing.cpu_count()
)

View file

@@ -1,4 +1,4 @@
FROM nvidia/cuda:11.2.0-devel-ubuntu20.04 AS builder
FROM nvidia/cuda:11.2.2-devel-ubuntu20.04 AS builder
# Env variables
ENV DEBIAN_FRONTEND=noninteractive \
@@ -21,7 +21,7 @@ RUN bash configure.sh clean
### Use a second image for the final asset to reduce the number and
# size of the layers.
FROM nvidia/cuda:11.2.0-runtime-ubuntu20.04
FROM nvidia/cuda:11.2.2-runtime-ubuntu20.04
#FROM nvidia/cuda:11.2.0-devel-ubuntu20.04
# Env variables

View file

@@ -0,0 +1,76 @@
from opendm import log
from shlex import _find_unsafe
import json
import os
def double_quote(s):
"""Return a shell-escaped version of the string *s*."""
if not s:
return '""'
if _find_unsafe(s) is None:
return s
# use double quotes, and prefix double quotes with a \
# the string $"b is then quoted as "$\"b"
return '"' + s.replace('"', '\\\"') + '"'
def args_to_dict(args):
args_dict = vars(args)
result = {}
for k in sorted(args_dict.keys()):
# Skip _is_set keys
if k.endswith("_is_set"):
continue
# Don't leak token
if k == 'sm_cluster' and args_dict[k] is not None:
result[k] = True
else:
result[k] = args_dict[k]
return result
def save_opts(opts_json, args):
try:
with open(opts_json, "w", encoding='utf-8') as f:
f.write(json.dumps(args_to_dict(args)))
except Exception as e:
log.ODM_WARNING("Cannot save options to %s: %s" % (opts_json, str(e)))
def compare_args(opts_json, args, rerun_stages):
if not os.path.isfile(opts_json):
return {}
try:
diff = {}
with open(opts_json, "r", encoding="utf-8") as f:
prev_args = json.loads(f.read())
cur_args = args_to_dict(args)
for opt in cur_args:
cur_value = cur_args[opt]
prev_value = prev_args.get(opt, None)
stage = rerun_stages.get(opt, None)
if stage is not None and cur_value != prev_value:
diff[opt] = prev_value
return diff
except:
return {}
def find_rerun_stage(opts_json, args, rerun_stages, processopts):
# Find the proper rerun stage if one is not explicitly set
if not ('rerun_is_set' in args or 'rerun_from_is_set' in args or 'rerun_all_is_set' in args):
args_diff = compare_args(opts_json, args, rerun_stages)
if args_diff:
if 'split_is_set' in args:
return processopts[processopts.index('dataset'):], args_diff
try:
stage_idxs = [processopts.index(rerun_stages[opt]) for opt in args_diff.keys() if rerun_stages[opt] is not None]
return processopts[min(stage_idxs):], args_diff
except ValueError as e:
print(str(e))
return None, {}
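# Hypothetical walk-through of the lookup above (illustration, not part of
# the diff): if only --mesh-size changed since the last run, rerun_stages
# maps it to 'odm_meshing' and processing resumes from that stage:
#
#   >>> args_diff = {'mesh_size': 300000}
#   >>> idx = min(processopts.index(rerun_stages[o]) for o in args_diff)
#   >>> processopts[idx:]
#   ['odm_meshing', 'mvs_texturing', 'odm_georeferencing', 'odm_dem',
#    'odm_orthophoto', 'odm_report', 'odm_postprocess']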

View file

@@ -25,6 +25,9 @@ def get_max_memory_mb(minimum = 100, use_at_most = 0.5):
"""
return max(minimum, (virtual_memory().available / 1024 / 1024) * use_at_most)
def get_total_memory():
return virtual_memory().total
def parallel_map(func, items, max_workers=1, single_thread_fallback=True):
"""
Our own implementation for parallel processing

View file

@@ -13,6 +13,100 @@ processopts = ['dataset', 'split', 'merge', 'opensfm', 'openmvs', 'odm_filterpoints',
'odm_meshing', 'mvs_texturing', 'odm_georeferencing',
'odm_dem', 'odm_orthophoto', 'odm_report', 'odm_postprocess']
rerun_stages = {
'3d_tiles': 'odm_postprocess',
'align': 'odm_georeferencing',
'auto_boundary': 'odm_filterpoints',
'auto_boundary_distance': 'odm_filterpoints',
'bg_removal': 'dataset',
'boundary': 'odm_filterpoints',
'build_overviews': 'odm_orthophoto',
'camera_lens': 'dataset',
'cameras': 'dataset',
'cog': 'odm_dem',
'copy_to': 'odm_postprocess',
'crop': 'odm_georeferencing',
'dem_decimation': 'odm_dem',
'dem_euclidean_map': 'odm_dem',
'dem_gapfill_steps': 'odm_dem',
'dem_resolution': 'odm_dem',
'dsm': 'odm_dem',
'dtm': 'odm_dem',
'end_with': None,
'fast_orthophoto': 'odm_filterpoints',
'feature_quality': 'opensfm',
'feature_type': 'opensfm',
'force_gps': 'opensfm',
'gcp': 'dataset',
'geo': 'dataset',
'gltf': 'mvs_texturing',
'gps_accuracy': 'dataset',
'help': None,
'ignore_gsd': 'opensfm',
'matcher_neighbors': 'opensfm',
'matcher_order': 'opensfm',
'matcher_type': 'opensfm',
'max_concurrency': None,
'merge': 'Merge',
'mesh_octree_depth': 'odm_meshing',
'mesh_size': 'odm_meshing',
'min_num_features': 'opensfm',
'name': None,
'no_gpu': None,
'optimize_disk_space': None,
'orthophoto_compression': 'odm_orthophoto',
'orthophoto_cutline': 'odm_orthophoto',
'orthophoto_kmz': 'odm_orthophoto',
'orthophoto_no_tiled': 'odm_orthophoto',
'orthophoto_png': 'odm_orthophoto',
'orthophoto_resolution': 'odm_orthophoto',
'pc_classify': 'odm_georeferencing',
'pc_copc': 'odm_georeferencing',
'pc_csv': 'odm_georeferencing',
'pc_ept': 'odm_georeferencing',
'pc_filter': 'openmvs',
'pc_las': 'odm_georeferencing',
'pc_quality': 'opensfm',
'pc_rectify': 'odm_georeferencing',
'pc_sample': 'odm_filterpoints',
'pc_skip_geometric': 'openmvs',
'primary_band': 'dataset',
'project_path': None,
'radiometric_calibration': 'opensfm',
'rerun': None,
'rerun_all': None,
'rerun_from': None,
'rolling_shutter': 'opensfm',
'rolling_shutter_readout': 'opensfm',
'sfm_algorithm': 'opensfm',
'sfm_no_partial': 'opensfm',
'skip_3dmodel': 'odm_meshing',
'skip_band_alignment': 'opensfm',
'skip_orthophoto': 'odm_orthophoto',
'skip_report': 'odm_report',
'sky_removal': 'dataset',
'sm_cluster': 'split',
'sm_no_align': 'split',
'smrf_scalar': 'odm_dem',
'smrf_slope': 'odm_dem',
'smrf_threshold': 'odm_dem',
'smrf_window': 'odm_dem',
'split': 'split',
'split_image_groups': 'split',
'split_overlap': 'split',
'texturing_keep_unseen_faces': 'mvs_texturing',
'texturing_single_material': 'mvs_texturing',
'texturing_skip_global_seam_leveling': 'mvs_texturing',
'tiles': 'odm_dem',
'use_3dmesh': 'mvs_texturing',
'use_exif': 'dataset',
'use_fixed_camera_params': 'opensfm',
'use_hybrid_bundle_adjustment': 'opensfm',
'version': None,
'video_limit': 'dataset',
'video_resolution': 'dataset',
}
with open(os.path.join(context.root_path, 'VERSION')) as version_file:
__version__ = version_file.read().strip()
@ -123,8 +217,8 @@ def config(argv=None, parser=None):
parser.add_argument('--feature-type',
metavar='<string>',
action=StoreValue,
default='sift',
choices=['akaze', 'hahog', 'orb', 'sift'],
default='dspsift',
choices=['akaze', 'dspsift', 'hahog', 'orb', 'sift'],
help=('Choose the algorithm for extracting keypoints and computing descriptors. '
'Can be one of: %(choices)s. Default: '
'%(default)s'))
@ -153,6 +247,13 @@ def config(argv=None, parser=None):
default=0,
type=int,
help='Perform image matching with the nearest images based on GPS exif data. Set to 0 to match by triangulation. Default: %(default)s')
parser.add_argument('--matcher-order',
metavar='<positive integer>',
action=StoreValue,
default=0,
type=int,
help='Perform image matching with the nearest N images based on image filename order. Can speed up processing of sequential images, such as those extracted from video. It is applied only on non-georeferenced datasets. Set to 0 to disable. Default: %(default)s')
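As an illustration of what nearest-N filename ordering means (assumed pairing logic, not the actual OpenSfM implementation), each image is paired with the next N images in sorted filename order:

    def order_pairs(filenames, n):
        # Pair each image with the n images that follow it in sorted filename order.
        ordered = sorted(filenames)
        return [(ordered[i], ordered[j])
                for i in range(len(ordered))
                for j in range(i + 1, min(i + n + 1, len(ordered)))]

    order_pairs(["f3.jpg", "f1.jpg", "f2.jpg", "f4.jpg"], 1)
    # [('f1.jpg', 'f2.jpg'), ('f2.jpg', 'f3.jpg'), ('f3.jpg', 'f4.jpg')]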
parser.add_argument('--use-fixed-camera-params',
action=StoreTrue,
@ -175,7 +276,7 @@ def config(argv=None, parser=None):
metavar='<string>',
action=StoreValue,
default='auto',
choices=['auto', 'perspective', 'brown', 'fisheye', 'spherical', 'equirectangular', 'dual'],
choices=['auto', 'perspective', 'brown', 'fisheye', 'fisheye_opencv', 'spherical', 'equirectangular', 'dual'],
help=('Set a camera projection type. Manually setting a value '
'can help improve geometric undistortion. By default the application '
'tries to determine a lens type from the images metadata. Can be one of: %(choices)s. Default: '
@ -219,6 +320,12 @@ def config(argv=None, parser=None):
'Can be one of: %(choices)s. Default: '
'%(default)s'))
parser.add_argument('--sfm-no-partial',
action=StoreTrue,
nargs=0,
default=False,
help='Do not attempt to merge partial reconstructions. This can happen when images do not have sufficient overlap or are isolated. Default: %(default)s')
parser.add_argument('--sky-removal',
action=StoreTrue,
nargs=0,
@ -259,10 +366,11 @@ def config(argv=None, parser=None):
action=StoreTrue,
nargs=0,
default=False,
help='Ignore Ground Sampling Distance (GSD). GSD '
'caps the maximum resolution of image outputs and '
'resizes images when necessary, resulting in faster processing and '
'lower memory usage. Since GSD is an estimate, sometimes ignoring it can result in slightly better image output quality. Default: %(default)s')
help='Ignore Ground Sampling Distance (GSD). '
'A memory- and processor-hungry change relative to the default behavior if set to true. '
'Ordinarily, GSD estimates are used to cap the maximum resolution of image outputs and to resize images when necessary, resulting in faster processing and lower memory usage. '
'Since GSD is an estimate, sometimes ignoring it can result in slightly better image output quality. '
'Never set --ignore-gsd to true unless you are positive you need it, and even then: do not use it. Default: %(default)s')
parser.add_argument('--no-gpu',
action=StoreTrue,
@ -377,7 +485,7 @@ def config(argv=None, parser=None):
metavar='<positive float>',
action=StoreValue,
type=float,
default=2.5,
default=5,
help='Filters the point cloud by removing points that deviate more than N standard deviations from the local mean. Set to 0 to disable filtering. '
'Default: %(default)s')
@ -396,13 +504,6 @@ def config(argv=None, parser=None):
help='Geometric estimates improve the accuracy of the point cloud by computing geometrically consistent depthmaps but may not be usable in larger datasets. This flag disables geometric estimates. '
'Default: %(default)s')
parser.add_argument('--pc-tile',
action=StoreTrue,
nargs=0,
default=False,
help='Reduce the memory usage needed for depthmap fusion by splitting large scenes into tiles. Turn this on if your machine doesn\'t have much RAM and/or you\'ve set --pc-quality to high or ultra. Experimental. '
'Default: %(default)s')
parser.add_argument('--smrf-scalar',
metavar='<positive float>',
action=StoreValue,
@ -441,12 +542,6 @@ def config(argv=None, parser=None):
default=False,
help=('Skip normalization of colors across all images. Useful when processing radiometric data. Default: %(default)s'))
parser.add_argument('--texturing-skip-local-seam-leveling',
action=StoreTrue,
nargs=0,
default=False,
help='Skip the blending of colors near seams. Default: %(default)s')
parser.add_argument('--texturing-keep-unseen-faces',
action=StoreTrue,
nargs=0,
@ -537,7 +632,7 @@ def config(argv=None, parser=None):
action=StoreValue,
type=float,
default=5,
help='DSM/DTM resolution in cm / pixel. Note that this value is capped to 2x the ground sampling distance (GSD) estimate. To remove the cap, check --ignore-gsd also.'
help='DSM/DTM resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate.'
' Default: %(default)s')
parser.add_argument('--dem-decimation',
@ -564,7 +659,7 @@ def config(argv=None, parser=None):
action=StoreValue,
default=5,
type=float,
help=('Orthophoto resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate. To remove the cap, check --ignore-gsd also. '
help=('Orthophoto resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate. '
'Default: %(default)s'))
parser.add_argument('--orthophoto-no-tiled',
@ -743,7 +838,7 @@ def config(argv=None, parser=None):
type=float,
action=StoreValue,
metavar='<positive float>',
default=10,
default=3,
help='Set a value in meters for the GPS Dilution of Precision (DOP) '
'information for all images. If your images are tagged '
'with high precision GPS information (RTK), this value will be automatically '
@ -785,7 +880,7 @@ def config(argv=None, parser=None):
'Default: %(default)s'))
args, unknown = parser.parse_known_args(argv)
DEPRECATED = ["--verbose", "--debug", "--time", "--resize-to", "--depthmap-resolution", "--pc-geometric", "--texturing-data-term", "--texturing-outlier-removal-type", "--texturing-tone-mapping"]
DEPRECATED = ["--verbose", "--debug", "--time", "--resize-to", "--depthmap-resolution", "--pc-geometric", "--texturing-data-term", "--texturing-outlier-removal-type", "--texturing-tone-mapping", "--texturing-skip-local-seam-leveling"]
unknown_e = [p for p in unknown if p not in DEPRECATED]
if len(unknown_e) > 0:
raise parser.error("unrecognized arguments: %s" % " ".join(unknown_e))

View file

@ -5,22 +5,17 @@ import numpy
import math
import time
import shutil
import functools
import glob
import re
from joblib import delayed, Parallel
from opendm.system import run
from opendm import point_cloud
from opendm import io
from opendm import system
from opendm.concurrency import get_max_memory, parallel_map
from scipy import ndimage
from opendm.concurrency import get_max_memory, parallel_map, get_total_memory
from datetime import datetime
from opendm.vendor.gdal_fillnodata import main as gdal_fillnodata
from opendm import log
try:
import Queue as queue
except:
import queue
import threading
from .ground_rectification.rectify import run_rectification
from . import pdal
@ -68,114 +63,51 @@ error = None
def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56'], gapfill=True,
outdir='', resolution=0.1, max_workers=1, max_tile_size=4096,
decimation=None, keep_unfilled_copy=False,
apply_smoothing=True):
decimation=None, with_euclidean_map=False,
apply_smoothing=True, max_tiles=None):
""" Create DEM from multiple radii, and optionally gapfill """
global error
error = None
start = datetime.now()
if not os.path.exists(outdir):
log.ODM_INFO("Creating %s" % outdir)
os.mkdir(outdir)
extent = point_cloud.get_extent(input_point_cloud)
log.ODM_INFO("Point cloud bounds are [minx: %s, maxx: %s] [miny: %s, maxy: %s]" % (extent['minx'], extent['maxx'], extent['miny'], extent['maxy']))
ext_width = extent['maxx'] - extent['minx']
ext_height = extent['maxy'] - extent['miny']
w, h = (int(math.ceil(ext_width / float(resolution))),
int(math.ceil(ext_height / float(resolution))))
# Set a floor, no matter the resolution parameter
# (sometimes a wrongly estimated scale of the model can cause the resolution
# to be set unrealistically low, causing errors)
RES_FLOOR = 64
if w < RES_FLOOR and h < RES_FLOOR:
prev_w, prev_h = w, h
if w >= h:
w, h = (RES_FLOOR, int(math.ceil(ext_height / ext_width * RES_FLOOR)))
else:
w, h = (int(math.ceil(ext_width / ext_height * RES_FLOOR)), RES_FLOOR)
floor_ratio = prev_w / float(w)
resolution *= floor_ratio
radiuses = [str(float(r) * floor_ratio) for r in radiuses]
log.ODM_WARNING("Really low resolution DEM requested %s will set floor at %s pixels. Resolution changed to %s. The scale of this reconstruction might be off." % ((prev_w, prev_h), RES_FLOOR, resolution))
final_dem_pixels = w * h
num_splits = int(max(1, math.ceil(math.log(math.ceil(final_dem_pixels / float(max_tile_size * max_tile_size)))/math.log(2))))
num_tiles = num_splits * num_splits
log.ODM_INFO("DEM resolution is %s, max tile size is %s, will split DEM generation into %s tiles" % ((h, w), max_tile_size, num_tiles))
tile_bounds_width = ext_width / float(num_splits)
tile_bounds_height = ext_height / float(num_splits)
tiles = []
for r in radiuses:
minx = extent['minx']
for x in range(num_splits):
miny = extent['miny']
if x == num_splits - 1:
maxx = extent['maxx']
else:
maxx = minx + tile_bounds_width
for y in range(num_splits):
if y == num_splits - 1:
maxy = extent['maxy']
else:
maxy = miny + tile_bounds_height
filename = os.path.join(os.path.abspath(outdir), '%s_r%s_x%s_y%s.tif' % (dem_type, r, x, y))
tiles.append({
'radius': r,
'bounds': {
'minx': minx,
'maxx': maxx,
'miny': miny,
'maxy': maxy
},
'filename': filename
})
miny = maxy
minx = maxx
# Sort tiles by decreasing radius
tiles.sort(key=lambda t: float(t['radius']), reverse=True)
def process_tile(q):
log.ODM_INFO("Generating %s (%s, radius: %s, resolution: %s)" % (q['filename'], output_type, q['radius'], resolution))
d = pdal.json_gdal_base(q['filename'], output_type, q['radius'], resolution, q['bounds'])
if dem_type == 'dtm':
d = pdal.json_add_classification_filter(d, 2)
if decimation is not None:
d = pdal.json_add_decimation_filter(d, decimation)
pdal.json_add_readers(d, [input_point_cloud])
pdal.run_pipeline(d)
parallel_map(process_tile, tiles, max_workers)
kwargs = {
'input': input_point_cloud,
'outdir': outdir,
'outputType': output_type,
'radiuses': ",".join(map(str, radiuses)),
'resolution': resolution,
'maxTiles': 0 if max_tiles is None else max_tiles,
'decimation': 1 if decimation is None else decimation,
'classification': 2 if dem_type == 'dtm' else -1,
'tileSize': max_tile_size
}
system.run('renderdem "{input}" '
'--outdir "{outdir}" '
'--output-type {outputType} '
'--radiuses {radiuses} '
'--resolution {resolution} '
'--max-tiles {maxTiles} '
'--decimation {decimation} '
'--classification {classification} '
'--tile-size {tileSize} '
'--force '.format(**kwargs), env_vars={'OMP_NUM_THREADS': max_workers})
output_file = "%s.tif" % dem_type
output_path = os.path.abspath(os.path.join(outdir, output_file))
# Verify tile results
for t in tiles:
if not os.path.exists(t['filename']):
raise Exception("Error creating %s, %s failed to be created" % (output_file, t['filename']))
# Fetch tiles
tiles = []
for p in glob.glob(os.path.join(os.path.abspath(outdir), "*.tif")):
filename = os.path.basename(p)
m = re.match("^r([\d\.]+)_x\d+_y\d+\.tif", filename)
if m is not None:
tiles.append({'filename': p, 'radius': float(m.group(1))})
if len(tiles) == 0:
raise system.ExitException("No DEM tiles were generated, something went wrong")
log.ODM_INFO("Generated %s tiles" % len(tiles))
# Sort tiles by decreasing radius
tiles.sort(key=lambda t: float(t['radius']), reverse=True)
# Create virtual raster
tiles_vrt_path = os.path.abspath(os.path.join(outdir, "tiles.vrt"))
@ -187,7 +119,6 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
run('gdalbuildvrt -input_file_list "%s" "%s" ' % (tiles_file_list, tiles_vrt_path))
merged_vrt_path = os.path.abspath(os.path.join(outdir, "merged.vrt"))
geotiff_tmp_path = os.path.abspath(os.path.join(outdir, 'tiles.tmp.tif'))
geotiff_small_path = os.path.abspath(os.path.join(outdir, 'tiles.small.tif'))
geotiff_small_filled_path = os.path.abspath(os.path.join(outdir, 'tiles.small_filled.tif'))
geotiff_path = os.path.abspath(os.path.join(outdir, 'tiles.tif'))
@ -199,7 +130,6 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
'tiles_vrt': tiles_vrt_path,
'merged_vrt': merged_vrt_path,
'geotiff': geotiff_path,
'geotiff_tmp': geotiff_tmp_path,
'geotiff_small': geotiff_small_path,
'geotiff_small_filled': geotiff_small_filled_path
}
@ -208,31 +138,27 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
# Sometimes, for some reason gdal_fillnodata.py
# behaves strangely when reading data directly from a .VRT
# so we need to convert to GeoTIFF first.
# Scale to 10% size
run('gdal_translate '
'-co NUM_THREADS={threads} '
'-co BIGTIFF=IF_SAFER '
'-co COMPRESS=DEFLATE '
'--config GDAL_CACHEMAX {max_memory}% '
'"{tiles_vrt}" "{geotiff_tmp}"'.format(**kwargs))
# Scale to 10% size
run('gdal_translate '
'-co NUM_THREADS={threads} '
'-co BIGTIFF=IF_SAFER '
'--config GDAL_CACHEMAX {max_memory}% '
'-outsize 10% 0 '
'"{geotiff_tmp}" "{geotiff_small}"'.format(**kwargs))
'-outsize 10% 0 '
'"{tiles_vrt}" "{geotiff_small}"'.format(**kwargs))
# Fill scaled
gdal_fillnodata(['.',
'-co', 'NUM_THREADS=%s' % kwargs['threads'],
'-co', 'BIGTIFF=IF_SAFER',
'-co', 'COMPRESS=DEFLATE',
'--config', 'GDAL_CACHE_MAX', str(kwargs['max_memory']) + '%',
'-b', '1',
'-of', 'GTiff',
kwargs['geotiff_small'], kwargs['geotiff_small_filled']])
# Merge filled scaled DEM with unfilled DEM using bilinear interpolation
run('gdalbuildvrt -resolution highest -r bilinear "%s" "%s" "%s"' % (merged_vrt_path, geotiff_small_filled_path, geotiff_tmp_path))
run('gdalbuildvrt -resolution highest -r bilinear "%s" "%s" "%s"' % (merged_vrt_path, geotiff_small_filled_path, tiles_vrt_path))
run('gdal_translate '
'-co NUM_THREADS={threads} '
'-co TILED=YES '
@ -255,14 +181,14 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
else:
os.replace(geotiff_path, output_path)
if os.path.exists(geotiff_tmp_path):
if not keep_unfilled_copy:
os.remove(geotiff_tmp_path)
else:
os.replace(geotiff_tmp_path, io.related_file_path(output_path, postfix=".unfilled"))
if os.path.exists(tiles_vrt_path):
if with_euclidean_map:
emap_path = io.related_file_path(output_path, postfix=".euclideand")
compute_euclidean_map(tiles_vrt_path, emap_path, overwrite=True)
for cleanup_file in [tiles_vrt_path, tiles_file_list, merged_vrt_path, geotiff_small_path, geotiff_small_filled_path]:
if os.path.exists(cleanup_file): os.remove(cleanup_file)
for t in tiles:
if os.path.exists(t['filename']): os.remove(t['filename'])
@ -278,12 +204,20 @@ def compute_euclidean_map(geotiff_path, output_path, overwrite=False):
with rasterio.open(geotiff_path) as f:
nodata = f.nodatavals[0]
if not os.path.exists(output_path) or overwrite:
if not os.path.isfile(output_path) or overwrite:
if os.path.isfile(output_path):
os.remove(output_path)
log.ODM_INFO("Computing euclidean distance: %s" % output_path)
if gdal_proximity is not None:
try:
gdal_proximity(['gdal_proximity.py', geotiff_path, output_path, '-values', str(nodata)])
gdal_proximity(['gdal_proximity.py',
geotiff_path, output_path, '-values', str(nodata),
'-co', 'TILED=YES',
'-co', 'BIGTIFF=IF_SAFER',
'-co', 'COMPRESS=DEFLATE',
])
except Exception as e:
log.ODM_WARNING("Cannot compute euclidean distance: %s" % str(e))
@ -299,68 +233,31 @@ def compute_euclidean_map(geotiff_path, output_path, overwrite=False):
return output_path
def median_smoothing(geotiff_path, output_path, smoothing_iterations=1, window_size=512, num_workers=1):
def median_smoothing(geotiff_path, output_path, window_size=512, num_workers=1, radius=4):
""" Apply median smoothing """
start = datetime.now()
if not os.path.exists(geotiff_path):
raise Exception('File %s does not exist!' % geotiff_path)
log.ODM_INFO('Starting smoothing...')
with rasterio.open(geotiff_path) as img:
nodata = img.nodatavals[0]
dtype = img.dtypes[0]
shape = img.shape
arr = img.read()[0]
for i in range(smoothing_iterations):
log.ODM_INFO("Smoothing iteration %s" % str(i + 1))
rows, cols = numpy.meshgrid(numpy.arange(0, shape[0], window_size), numpy.arange(0, shape[1], window_size))
rows = rows.flatten()
cols = cols.flatten()
rows_end = numpy.minimum(rows + window_size, shape[0])
cols_end= numpy.minimum(cols + window_size, shape[1])
windows = numpy.dstack((rows, cols, rows_end, cols_end)).reshape(-1, 4)
filter = functools.partial(ndimage.median_filter, size=9, output=dtype, mode='nearest')
# threading backend and GIL released filter are important for memory efficiency and multi-core performance
window_arrays = Parallel(n_jobs=num_workers, backend='threading')(delayed(window_filter_2d)(arr, nodata , window, 9, filter) for window in windows)
for window, win_arr in zip(windows, window_arrays):
arr[window[0]:window[2], window[1]:window[3]] = win_arr
log.ODM_INFO("Smoothing completed in %s" % str(datetime.now() - start))
# write output
with rasterio.open(output_path, 'w', BIGTIFF="IF_SAFER", **img.profile) as imgout:
imgout.write(arr, 1)
kwargs = {
'input': geotiff_path,
'output': output_path,
'window': window_size,
'radius': radius,
}
system.run('fastrasterfilter "{input}" '
'--output "{output}" '
'--window-size {window} '
'--radius {radius} '
'--co TILED=YES '
'--co BIGTIFF=IF_SAFER '
'--co COMPRESS=DEFLATE '.format(**kwargs), env_vars={'OMP_NUM_THREADS': num_workers})
log.ODM_INFO('Completed smoothing to create %s in %s' % (output_path, datetime.now() - start))
return output_path
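A minimal usage sketch (paths hypothetical); the windowed median filtering now runs in the external fastrasterfilter tool, with the thread count passed via OMP_NUM_THREADS:

    smoothed_dsm = median_smoothing("odm_dem/dsm.tif", "odm_dem/dsm_smoothed.tif",
                                    window_size=512, num_workers=4, radius=4)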
def window_filter_2d(arr, nodata, window, kernel_size, filter):
"""
Apply a filter to the DEM within a window; expects to work with kernel-based filters
:param geotiff_path: path to the geotiff to filter
:param window: the window to apply the filter, should be a list contains row start, col_start, row_end, col_end
:param kernel_size: the size of the kernel for the filter, works with odd numbers, need to test if it works with even numbers
:param filter: the filter function which takes a 2d array as input and filter results as output.
"""
shape = arr.shape[:2]
if window[0] < 0 or window[1] < 0 or window[2] > shape[0] or window[3] > shape[1]:
raise Exception('Window is out of bounds')
expanded_window = [ max(0, window[0] - kernel_size // 2), max(0, window[1] - kernel_size // 2), min(shape[0], window[2] + kernel_size // 2), min(shape[1], window[3] + kernel_size // 2) ]
win_arr = arr[expanded_window[0]:expanded_window[2], expanded_window[1]:expanded_window[3]]
# Should have a better way to handle nodata, similar to the way the filter algorithms handle the border (reflection, nearest, interpolation, etc).
# For now will follow the old approach to guarantee identical outputs
nodata_locs = win_arr == nodata
win_arr = filter(win_arr)
win_arr[nodata_locs] = nodata
win_arr = win_arr[window[0] - expanded_window[0] : window[2] - expanded_window[0], window[1] - expanded_window[1] : window[3] - expanded_window[1]]
return win_arr
def get_dem_radius_steps(stats_file, steps, resolution, multiplier = 1.0):
radius_steps = [point_cloud.get_spacing(stats_file, resolution) * multiplier]
for _ in range(steps - 1):

View file

@ -76,7 +76,7 @@ def write_cloud(metadata, point_cloud, output_point_cloud_path):
{
"type": "writers.las",
"filename": output_point_cloud_path,
"compression": "laszip",
"compression": "lazperf",
"extra_dims": "all"
}
]

View file

@ -62,17 +62,48 @@ def build_untwine(input_point_cloud_files, tmpdir, output_path, max_concurrency=
# Run untwine
system.run('untwine --temp_dir "{tmpdir}" {files} --output_dir "{outputdir}"'.format(**kwargs))
def build_copc(input_point_cloud_files, output_file):
def build_copc(input_point_cloud_files, output_file, convert_rgb_8_to_16=False):
if len(input_point_cloud_files) == 0:
logger.ODM_WARNING("Cannot build COPC, no input files")
return
base_path, ext = os.path.splitext(output_file)
tmpdir = io.related_file_path(base_path, postfix="-tmp")
if os.path.exists(tmpdir):
log.ODM_WARNING("Removing previous directory %s" % tmpdir)
shutil.rmtree(tmpdir)
cleanup = [tmpdir]
if convert_rgb_8_to_16:
tmpdir16 = io.related_file_path(base_path, postfix="-tmp16")
if os.path.exists(tmpdir16):
log.ODM_WARNING("Removing previous directory %s" % tmpdir16)
shutil.rmtree(tmpdir16)
os.makedirs(tmpdir16, exist_ok=True)
cleanup.append(tmpdir16)
converted = []
ok = True
for f in input_point_cloud_files:
# Convert 8bit RGB to 16bit RGB (per COPC spec)
base = os.path.basename(f)
filename, ext = os.path.splitext(base)
out_16 = os.path.join(tmpdir16, "%s_16%s" % (filename, ext))
try:
system.run('pdal translate -i "{input}" -o "{output}" assign '
'--filters.assign.value="Red = Red / 255 * 65535" '
'--filters.assign.value="Green = Green / 255 * 65535" '
'--filters.assign.value="Blue = Blue / 255 * 65535" '.format(input=f, output=out_16))
converted.append(out_16)
except Exception as e:
log.ODM_WARNING("Cannot convert point cloud to 16bit RGB, COPC is not going to follow the official spec: %s" % str(e))
ok = False
break
if ok:
input_point_cloud_files = converted
kwargs = {
'tmpdir': tmpdir,
'files': "--files " + " ".join(map(double_quote, input_point_cloud_files)),
@ -82,5 +113,6 @@ def build_copc(input_point_cloud_files, output_file):
# Run untwine
system.run('untwine --temp_dir "{tmpdir}" {files} -o "{output}" --single_file'.format(**kwargs))
if os.path.exists(tmpdir):
shutil.rmtree(tmpdir)
for d in cleanup:
if os.path.exists(d):
shutil.rmtree(d)
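The assign expressions above stretch 8-bit color onto the full 16-bit range the COPC spec expects; the scale factor is exactly 65535 / 255 = 257:

    for v in (0, 128, 255):
        print(v, "->", round(v / 255 * 65535))   # 0 -> 0, 128 -> 32896, 255 -> 65535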

opendm/exiftool.py 100644 (new file, +94 lines)
View file

@ -0,0 +1,94 @@
import json
import os
import tempfile
import base64
from rasterio.io import MemoryFile

from opendm.system import run
from opendm import log
from opendm.utils import double_quote

def extract_raw_thermal_image_data(image_path):
    try:
        f, tmp_file_path = tempfile.mkstemp(suffix='.json')
        os.close(f)

        try:
            output = run("exiftool -b -x ThumbnailImage -x PreviewImage -j \"%s\" > \"%s\"" % (image_path, tmp_file_path), quiet=True)

            with open(tmp_file_path) as f:
                j = json.loads(f.read())

                if isinstance(j, list):
                    j = j[0] # single file

                    if "RawThermalImage" in j:
                        imageBytes = base64.b64decode(j["RawThermalImage"][len("base64:"):])

                        with MemoryFile(imageBytes) as memfile:
                            with memfile.open() as dataset:
                                img = dataset.read()
                                bands, h, w = img.shape

                                if bands != 1:
                                    raise Exception("Raw thermal image has more than one band? This is not supported")

                                # (1, 512, 640) --> (512, 640, 1)
                                img = img[0][:,:,None]

                                del j["RawThermalImage"]

                                return extract_temperature_params_from(j), img
                else:
                    raise Exception("Invalid JSON (not a list)")

        except Exception as e:
            log.ODM_WARNING("Cannot extract tags using exiftool: %s" % str(e))
            return {}, None
        finally:
            if os.path.isfile(tmp_file_path):
                os.remove(tmp_file_path)
    except Exception as e:
        log.ODM_WARNING("Cannot create temporary file: %s" % str(e))
        return {}, None

def unit(unit):
    def _convert(v):
        if isinstance(v, float):
            return v
        elif isinstance(v, str):
            if not v[-1].isnumeric():
                if v[-1].upper() != unit.upper():
                    log.ODM_WARNING("Assuming %s is in %s" % (v, unit))
                return float(v[:-1])
            else:
                return float(v)
        else:
            return float(v)
    return _convert

def extract_temperature_params_from(tags):
    # Defaults
    meta = {
        "Emissivity": float,
        "ObjectDistance": unit("m"),
        "AtmosphericTemperature": unit("C"),
        "ReflectedApparentTemperature": unit("C"),
        "IRWindowTemperature": unit("C"),
        "IRWindowTransmission": float,
        "RelativeHumidity": unit("%"),
        "PlanckR1": float,
        "PlanckB": float,
        "PlanckF": float,
        "PlanckO": float,
        "PlanckR2": float,
    }

    params = {}
    for m in meta:
        if m not in tags:
            # All or nothing
            raise Exception("Cannot find %s in tags" % m)
        params[m] = (meta[m])(tags[m])

    return params
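For illustration, the unit() closure above normalizes exiftool values that may arrive as bare floats, numeric strings, or strings with a unit suffix (the sample values below are hypothetical):

    to_celsius = unit("C")
    to_celsius(21.5)      # 21.5  -- floats pass through
    to_celsius("21.5")    # 21.5  -- numeric strings are parsed
    to_celsius("21.5C")   # 21.5  -- a matching unit suffix is stripped silently
    unit("m")("12.0C")    # 12.0  -- mismatched suffix logs "Assuming 12.0C is in m"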

View file

@ -12,6 +12,9 @@ class GeoFile:
with open(self.geo_path, 'r') as f:
contents = f.read().strip()
# Strip eventual BOM characters
contents = contents.replace('\ufeff', '')
lines = list(map(str.strip, contents.split('\n')))
if lines:

View file

@ -279,9 +279,10 @@ def obj2glb(input_obj, output_glb, rtc=(None, None), draco_compression=True, _in
)
gltf.extensionsRequired = ['KHR_materials_unlit']
gltf.extensionsUsed = ['KHR_materials_unlit']
if rtc != (None, None) and len(rtc) >= 2:
gltf.extensionsUsed = ['CESIUM_RTC', 'KHR_materials_unlit']
gltf.extensionsUsed.append('CESIUM_RTC')
gltf.extensions = {
'CESIUM_RTC': {
'center': [float(rtc[0]), float(rtc[1]), 0.0]

View file

@ -151,6 +151,8 @@ def parse_srs_header(header):
' - EPSG:*****\n'
' - WGS84 UTM **(N|S)\n'
' - Any valid proj4 string (for example, +proj=utm +zone=32 +north +ellps=WGS84 +datum=WGS84 +units=m +no_defs)\n\n'
' Some valid EPSG codes are not yet available in OpenDroneMap and need to be substituted with valid proj4 strings.\n'
' Try searching for equivalent proj4 strings at spatialreference.org or epsg.io.\n'
'Modify your input and try again.' % header)
raise RuntimeError(e)
@ -165,4 +167,4 @@ def utm_transformers_from_ll(lon, lat):
target_srs = utm_srs_from_ll(lon, lat)
ll_to_utm = transformer(source_srs, target_srs)
utm_to_ll = transformer(target_srs, source_srs)
return ll_to_utm, utm_to_ll
return ll_to_utm, utm_to_ll

View file

@ -7,11 +7,11 @@ import dateutil.parser
import shutil
import multiprocessing
from opendm.loghelpers import double_quote, args_to_dict
from opendm.arghelpers import double_quote, args_to_dict
from vmem import virtual_memory
if sys.platform == 'win32':
# No colors on Windows, sorry!
if sys.platform == 'win32' or os.getenv('no_ansiesc'):
# No colors on Windows (sorry!) or when the no_ansiesc env variable is set
HEADER = ''
OKBLUE = ''
OKGREEN = ''

View file

@ -1,28 +0,0 @@
from shlex import _find_unsafe

def double_quote(s):
    """Return a shell-escaped version of the string *s*."""
    if not s:
        return '""'
    if _find_unsafe(s) is None:
        return s

    # use double quotes, and prefix double quotes with a \
    # the string $"b is then quoted as "$\"b"
    return '"' + s.replace('"', '\\\"') + '"'

def args_to_dict(args):
    args_dict = vars(args)
    result = {}
    for k in sorted(args_dict.keys()):
        # Skip _is_set keys
        if k.endswith("_is_set"):
            continue

        # Don't leak token
        if k == 'sm_cluster' and args_dict[k] is not None:
            result[k] = True
        else:
            result[k] = args_dict[k]

    return result

View file

@ -9,7 +9,7 @@ from opendm import point_cloud
from scipy import signal
import numpy as np
def create_25dmesh(inPointCloud, outMesh, radius_steps=["0.05"], dsm_resolution=0.05, depth=8, samples=1, maxVertexCount=100000, available_cores=None, method='gridded', smooth_dsm=True):
def create_25dmesh(inPointCloud, outMesh, radius_steps=["0.05"], dsm_resolution=0.05, depth=8, samples=1, maxVertexCount=100000, available_cores=None, method='gridded', smooth_dsm=True, max_tiles=None):
# Create DSM from point cloud
# Create temporary directory
@ -31,7 +31,8 @@ def create_25dmesh(inPointCloud, outMesh, radius_steps=["0.05"], dsm_resolution=
outdir=tmp_directory,
resolution=dsm_resolution,
max_workers=available_cores,
apply_smoothing=smooth_dsm
apply_smoothing=smooth_dsm,
max_tiles=max_tiles
)
if method == 'gridded':
@ -122,6 +123,7 @@ def dem_to_mesh_gridded(inGeotiff, outMesh, maxVertexCount, maxConcurrency=1):
system.run('"{reconstructmesh}" -i "{infile}" '
'-o "{outfile}" '
'--archive-type 3 '
'--remove-spikes 0 --remove-spurious 0 --smooth 0 '
'--target-face-num {max_faces} -v 0'.format(**cleanupArgs))
@ -185,7 +187,7 @@ def screened_poisson_reconstruction(inPointCloud, outMesh, depth = 8, samples =
if threads < 1:
break
else:
log.ODM_WARNING("PoissonRecon failed with %s threads, let's retry with %s..." % (threads, threads // 2))
log.ODM_WARNING("PoissonRecon failed with %s threads, let's retry with %s..." % (threads * 2, threads))
# Cleanup and reduce vertex count if necessary
@ -198,6 +200,7 @@ def screened_poisson_reconstruction(inPointCloud, outMesh, depth = 8, samples =
system.run('"{reconstructmesh}" -i "{infile}" '
'-o "{outfile}" '
'--archive-type 3 '
'--remove-spikes 0 --remove-spurious 20 --smooth 0 '
'--target-face-num {max_faces} -v 0'.format(**cleanupArgs))

View file

@ -181,8 +181,13 @@ def get_primary_band_name(multi_camera, user_band_name):
if len(multi_camera) < 1:
raise Exception("Invalid multi_camera list")
# multi_camera is already sorted by band_index
# Pick RGB, or Green, or Blue, in this order, if available, otherwise first band
if user_band_name == "auto":
for aliases in [['rgb', 'redgreenblue'], ['green', 'g'], ['blue', 'b']]:
for band in multi_camera:
if band['name'].lower() in aliases:
return band['name']
return multi_camera[0]['name']
for band in multi_camera:
@ -504,6 +509,28 @@ def find_features_homography(image_gray, align_image_gray, feature_retention=0.7
# Detect SIFT features and compute descriptors.
detector = cv2.SIFT_create(edgeThreshold=10, contrastThreshold=0.1)
h,w = image_gray.shape
max_dim = max(h, w)
max_size = 2048
if max_dim > max_size:
if max_dim == w:
f = max_size / w
else:
f = max_size / h
image_gray = cv2.resize(image_gray, None, fx=f, fy=f, interpolation=cv2.INTER_AREA)
h,w = image_gray.shape
if align_image_gray.shape[0] != image_gray.shape[0]:
fx = image_gray.shape[1]/align_image_gray.shape[1]
fy = image_gray.shape[0]/align_image_gray.shape[0]
align_image_gray = cv2.resize(align_image_gray, None,
fx=fx,
fy=fy,
interpolation=(cv2.INTER_AREA if (fx < 1.0 and fy < 1.0) else cv2.INTER_LANCZOS4))
kp_image, desc_image = detector.detectAndCompute(image_gray, None)
kp_align_image, desc_align_image = detector.detectAndCompute(align_image_gray, None)
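Worked example of the downscale above with a hypothetical 5472×3648 image: the larger dimension is the width, so f = 2048/5472 ≈ 0.374 and SIFT runs on a roughly 2048×1365 image.

    h, w = 3648, 5472
    max_size = 2048
    f = max_size / w if max(h, w) == w else max_size / h
    print(round(f, 3), int(w * f), int(h * f))  # 0.374 2048 1365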

View file

@ -85,7 +85,7 @@ def generate_kmz(orthophoto_file, output_file=None, outsize=None):
system.run('gdal_translate -of KMLSUPEROVERLAY -co FORMAT=PNG "%s" "%s" %s '
'--config GDAL_CACHEMAX %s%% ' % (orthophoto_file, output_file, bandparam, get_max_memory()))
def post_orthophoto_steps(args, bounds_file_path, orthophoto_file, orthophoto_tiles_dir):
def post_orthophoto_steps(args, bounds_file_path, orthophoto_file, orthophoto_tiles_dir, resolution):
if args.crop > 0 or args.boundary:
Cropper.crop(bounds_file_path, orthophoto_file, get_orthophoto_vars(args), keep_original=not args.optimize_disk_space, warp_options=['-dstalpha'])
@ -99,7 +99,7 @@ def post_orthophoto_steps(args, bounds_file_path, orthophoto_file, orthophoto_ti
generate_kmz(orthophoto_file)
if args.tiles:
generate_orthophoto_tiles(orthophoto_file, orthophoto_tiles_dir, args.max_concurrency)
generate_orthophoto_tiles(orthophoto_file, orthophoto_tiles_dir, args.max_concurrency, resolution)
if args.cog:
convert_to_cogeo(orthophoto_file, max_workers=args.max_concurrency, compression=args.orthophoto_compression)

View file

@ -13,7 +13,7 @@ from opendm import system
from opendm import context
from opendm import camera
from opendm import location
from opendm.photo import find_largest_photo_dim, find_largest_photo
from opendm.photo import find_largest_photo_dims, find_largest_photo
from opensfm.large import metadataset
from opensfm.large import tools
from opensfm.actions import undistort
@ -64,7 +64,6 @@ class OSFMContext:
"Check that the images have enough overlap, "
"that there are enough recognizable features "
"and that the images are in focus. "
"You could also try to increase the --min-num-features parameter."
"The program will now exit.")
if rolling_shutter_correct:
@ -211,11 +210,25 @@ class OSFMContext:
'lowest': 0.0675,
}
max_dim = find_largest_photo_dim(photos)
max_dims = find_largest_photo_dims(photos)
if max_dim > 0:
if max_dims is not None:
w, h = max_dims
max_dim = max(w, h)
log.ODM_INFO("Maximum photo dimensions: %spx" % str(max_dim))
feature_process_size = int(max_dim * feature_quality_scale[args.feature_quality])
lower_limit = 320
upper_limit = 4480
megapixels = (w * h) / 1e6
multiplier = 1
if megapixels < 2:
multiplier = 2
elif megapixels > 42:
multiplier = 0.5
factor = min(1, feature_quality_scale[args.feature_quality] * multiplier)
feature_process_size = min(upper_limit, max(lower_limit, int(max_dim * factor)))
log.ODM_INFO("Photo dimensions for feature extraction: %ipx" % feature_process_size)
else:
log.ODM_WARNING("Cannot compute max image dimensions, going with defaults")
@ -227,6 +240,11 @@ class OSFMContext:
else:
matcher_graph_rounds = 50
matcher_neighbors = 0
# Always use matcher-neighbors if less than 4 pictures
if len(photos) <= 3:
matcher_graph_rounds = 0
matcher_neighbors = 3
config = [
"use_exif_size: no",
@ -246,6 +264,12 @@ class OSFMContext:
"triangulation_type: ROBUST",
"retriangulation_ratio: 2",
]
if args.matcher_order > 0:
if not reconstruction.is_georeferenced():
config.append("matching_order_neighbors: %s" % args.matcher_order)
else:
log.ODM_WARNING("Georeferenced reconstruction, ignoring --matcher-order")
if args.camera_lens != 'auto':
config.append("camera_projection_type: %s" % args.camera_lens.upper())
@ -272,9 +296,8 @@ class OSFMContext:
config.append("matcher_type: %s" % osfm_matchers[matcher_type])
# GPU acceleration?
if has_gpu(args):
max_photo = find_largest_photo(photos)
w, h = max_photo.width, max_photo.height
if has_gpu(args) and max_dims is not None:
w, h = max_dims
if w > h:
h = int((h / w) * feature_process_size)
w = int(feature_process_size)
@ -548,6 +571,8 @@ class OSFMContext:
pdf_report.save_report("report.pdf")
if os.path.exists(osfm_report_path):
if os.path.exists(report_path):
os.unlink(report_path)
shutil.move(osfm_report_path, report_path)
else:
log.ODM_WARNING("Report could not be generated")
@ -768,3 +793,12 @@ def get_all_submodel_paths(submodels_path, *all_paths):
result.append([os.path.join(submodels_path, f, ap) for ap in all_paths])
return result
def is_submodel(opensfm_root):
# A bit hackish, but works without introducing additional markers / flags
# Look at the path of the opensfm directory and see if "submodel_" is part of it
parts = os.path.abspath(opensfm_root).split(os.path.sep)
return (len(parts) >= 2 and parts[-2][:9] == "submodel_") or \
os.path.isfile(os.path.join(opensfm_root, "split_merge_stop_at_reconstruction.txt")) or \
os.path.isfile(os.path.join(opensfm_root, "features", "empty"))

View file

@ -19,7 +19,7 @@ from xml.parsers.expat import ExpatError
from opensfm.sensors import sensor_data
from opensfm.geo import ecef_from_lla
projections = ['perspective', 'fisheye', 'brown', 'dual', 'equirectangular', 'spherical']
projections = ['perspective', 'fisheye', 'fisheye_opencv', 'brown', 'dual', 'equirectangular', 'spherical']
def find_largest_photo_dims(photos):
max_mp = 0
@ -305,7 +305,7 @@ class ODM_Photo:
for xtags in xmp:
try:
band_name = self.get_xmp_tag(xtags, ['Camera:BandName', '@Camera:BandName'])
band_name = self.get_xmp_tag(xtags, ['Camera:BandName', '@Camera:BandName', 'FLIR:BandName'])
if band_name is not None:
self.band_name = band_name.replace(" ", "")
@ -428,6 +428,12 @@ class ODM_Photo:
camera_projection = self.get_xmp_tag(xtags, ['@Camera:ModelType', 'Camera:ModelType'])
if camera_projection is not None:
camera_projection = camera_projection.lower()
# Parrot Sequoia's "fisheye" model maps to "fisheye_opencv"
# or better yet, replace all fisheye with fisheye_opencv, but wait to change API signature
if camera_projection == "fisheye":
camera_projection = "fisheye_opencv"
if camera_projection in projections:
self.camera_projection = camera_projection
@ -612,9 +618,11 @@ class ODM_Photo:
else:
result.append(None)
return result
else:
elif hasattr(tag.values, 'den'):
return [float(tag.values.num) / float(tag.values.den) if tag.values.den != 0 else None]
else:
return [None]
def float_value(self, tag):
v = self.float_values(tag)
if len(v) > 0:
@ -623,6 +631,8 @@ class ODM_Photo:
def int_values(self, tag):
if isinstance(tag.values, list):
return [int(v) for v in tag.values]
elif isinstance(tag.values, str) and tag.values == '':
return []
else:
return [int(tag.values)]
@ -916,3 +926,6 @@ class ODM_Photo:
return self.width * self.height / 1e6
else:
return 0.0
def is_make_model(self, make, model):
return self.camera_make.lower() == make.lower() and self.camera_model.lower() == model.lower()

View file

@ -9,6 +9,8 @@ from opendm.concurrency import parallel_map
from opendm.utils import double_quote
from opendm.boundary import as_polygon, as_geojson
from opendm.dem.pdal import run_pipeline
from opendm.opc import classify
from opendm.dem import commands
def ply_info(input_ply):
if not os.path.exists(input_ply):
@ -274,6 +276,32 @@ def merge_ply(input_point_cloud_files, output_file, dims=None):
system.run(' '.join(cmd))
def post_point_cloud_steps(args, tree, rerun=False):
# Classify and rectify before generating derivative files
if args.pc_classify:
pc_classify_marker = os.path.join(tree.odm_georeferencing, 'pc_classify_done.txt')
if not io.file_exists(pc_classify_marker) or rerun:
log.ODM_INFO("Classifying {} using Simple Morphological Filter (1/2)".format(tree.odm_georeferencing_model_laz))
commands.classify(tree.odm_georeferencing_model_laz,
args.smrf_scalar,
args.smrf_slope,
args.smrf_threshold,
args.smrf_window
)
log.ODM_INFO("Classifying {} using OpenPointClass (2/2)".format(tree.odm_georeferencing_model_laz))
classify(tree.odm_georeferencing_model_laz, args.max_concurrency)
with open(pc_classify_marker, 'w') as f:
f.write('Classify: smrf\n')
f.write('Scalar: {}\n'.format(args.smrf_scalar))
f.write('Slope: {}\n'.format(args.smrf_slope))
f.write('Threshold: {}\n'.format(args.smrf_threshold))
f.write('Window: {}\n'.format(args.smrf_window))
if args.pc_rectify:
commands.rectify(tree.odm_georeferencing_model_laz)
# XYZ point cloud output
if args.pc_csv:
log.ODM_INFO("Creating CSV file (XYZ format)")
@ -311,4 +339,4 @@ def post_point_cloud_steps(args, tree, rerun=False):
log.ODM_INFO("Creating Cloud Optimized Point Cloud (COPC)")
copc_output = io.related_file_path(tree.odm_georeferencing_model_laz, postfix=".copc")
entwine.build_copc([tree.odm_georeferencing_model_laz], copc_output)
entwine.build_copc([tree.odm_georeferencing_model_laz], copc_output, convert_rgb_8_to_16=True)

View file

@ -2,6 +2,7 @@ from opendm import log
# Make Model (lowercase) --> readout time (ms)
RS_DATABASE = {
'autel robotics xt701': 25, # Autel Evo II 8k
'dji phantom vision fc200': 74, # Phantom 2
'dji fc300s': 33, # Phantom 3 Advanced
@ -11,18 +12,22 @@ RS_DATABASE = {
'dji fc330': 33, # Phantom 4
'dji fc6310': 33, # Phantom 4 Professional
'dji fc7203': 20, # Mavic Mini v1
'dji fc7203': lambda p: 19 if p.get_capture_megapixels() < 10 else 25, # DJI Mavic Mini v1 (at 16:9 => 9MP 19ms, at 4:3 => 12MP 25ms)
'dji fc2103': 32, # DJI Mavic Air 1
'dji fc3170': 27, # DJI Mavic Air 2
'dji fc3411': 32, # DJI Mavic Air 2S
'dji fc220': 64, # DJI Mavic Pro (Platinum)
'hasselblad l1d-20c': lambda p: 47 if p.get_capture_megapixels() < 17 else 56, # DJI Mavic 2 Pro (at 16:10 => 16.8MP 47ms, at 3:2 => 19.9MP 56ms. 4:3 has 17.7MP with same image height as 3:2 which can be concluded as same sensor readout)
'hasselblad l2d-20c': 16.6, # DJI Mavic 3 (not enterprise version)
'dji fc3582': lambda p: 26 if p.get_capture_megapixels() < 48 else 60, # DJI Mini 3 pro (at 48MP readout is 60ms, at 12MP it's 26ms)
'dji fc350': 30, # Inspire 1
'dji mavic2-enterprise-advanced': 31, # DJI Mavic 2 Enterprise Advanced
'dji zenmuse z30': 8, # DJI Zenmuse Z30
'yuneec e90': 44, # Yuneec E90
'gopro hero4 black': 30, # GoPro Hero 4 Black
@ -36,6 +41,10 @@ RS_DATABASE = {
'autel robotics xl724': 29, # Autel Nano+
'parrot anafi': 39, # Parrot Anafi
'autel robotics xt705': 30, # Autel EVO II pro
# Help us add more!
# See: https://github.com/OpenDroneMap/RSCalibration for instructions
}
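Entries are either a readout time in milliseconds or a callable that picks one from the capture resolution; resolving a value might look like the following sketch (the helper name and the 30 ms fallback are assumptions, not the actual ODM accessor):

    def readout_ms(photo, default=30):
        # Keys are "<make> <model>" lowercased; values may be numbers or callables.
        value = RS_DATABASE.get(("%s %s" % (photo.camera_make, photo.camera_model)).lower())
        if value is None:
            return default
        return value(photo) if callable(value) else value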

View file

@ -66,11 +66,12 @@ def sighandler(signum, frame):
signal.signal(signal.SIGINT, sighandler)
signal.signal(signal.SIGTERM, sighandler)
def run(cmd, env_paths=[context.superbuild_bin_path], env_vars={}, packages_paths=context.python_packages_paths):
def run(cmd, env_paths=[context.superbuild_bin_path], env_vars={}, packages_paths=context.python_packages_paths, quiet=False):
"""Run a system command"""
global running_subprocesses
log.ODM_INFO('running %s' % cmd)
if not quiet:
log.ODM_INFO('running %s' % cmd)
env = os.environ.copy()
sep = ":"
@ -101,7 +102,8 @@ def run(cmd, env_paths=[context.superbuild_bin_path], env_vars={}, packages_path
retcode = p.wait()
log.logger.log_json_process(cmd, retcode, list(lines))
if not quiet:
log.logger.log_json_process(cmd, retcode, list(lines))
running_subprocesses.remove(p)
if retcode < 0:

View file

@ -1,7 +1,9 @@
from opendm import log
from opendm.thermal_tools import dji_unpack
import cv2
import os
from opendm import log
from opendm.thermal_tools import dji_unpack
from opendm.exiftool import extract_raw_thermal_image_data
from opendm.thermal_tools.thermal_utils import sensor_vals_to_temp
def resize_to_match(image, match_photo = None):
"""
@ -19,28 +21,26 @@ def resize_to_match(image, match_photo = None):
interpolation=cv2.INTER_LANCZOS4)
return image
def dn_to_temperature(photo, image, dataset_tree):
def dn_to_temperature(photo, image, images_path):
"""
Convert Digital Number values to temperature (C) values
:param photo ODM_Photo
:param image numpy array containing image data
:param dataset_tree path to original source image to read data using PIL for DJI thermal photos
:param images_path path to original source image to read data using PIL for DJI thermal photos
:return numpy array with temperature (C) image values
"""
# Handle thermal bands
if photo.is_thermal():
# Every camera stores thermal information differently
# The following will work for MicaSense Altum cameras
# but not necessarily for others
if photo.camera_make == "MicaSense" and photo.camera_model == "Altum":
if photo.camera_make == "MicaSense" and photo.camera_model[:5] == "Altum":
image = image.astype("float32")
image -= (273.15 * 100.0) # Convert Kelvin to Celsius
image *= 0.01
return image
elif photo.camera_make == "DJI" and photo.camera_model == "ZH20T":
elif photo.camera_make == "DJI" and photo.camera_model == "ZH20T":
filename, file_extension = os.path.splitext(photo.filename)
# DJI H20T high gain mode supports measurement of -40~150 celsius degrees
if file_extension.lower() in [".tif", ".tiff"] and image.min() >= 23315: # Calibrated grayscale tif
@ -51,11 +51,18 @@ def dn_to_temperature(photo, image, dataset_tree):
else:
return image
elif photo.camera_make == "DJI" and photo.camera_model == "MAVIC2-ENTERPRISE-ADVANCED":
image = dji_unpack.extract_temperatures_dji(photo, image, dataset_tree)
image = dji_unpack.extract_temperatures_dji(photo, image, images_path)
image = image.astype("float32")
return image
else:
log.ODM_WARNING("Unsupported camera [%s %s], thermal band will have digital numbers." % (photo.camera_make, photo.camera_model))
try:
params, image = extract_raw_thermal_image_data(os.path.join(images_path, photo.filename))
image = sensor_vals_to_temp(image, **params)
except Exception as e:
log.ODM_WARNING("Cannot radiometrically calibrate %s: %s" % (photo.filename, str(e)))
image = image.astype("float32")
return image
else:
image = image.astype("float32")
log.ODM_WARNING("Tried to radiometrically calibrate a non-thermal image with temperature values (%s)" % photo.filename)

View file

@ -1,271 +0,0 @@
"""
THIS IS WIP, DON'T USE THIS FILE, IT IS HERE FOR FURTHER IMPROVEMENT
Tools for extracting thermal data from FLIR images.
Derived from https://bitbucket.org/nimmerwoner/flyr/src/master/
"""
import os
from io import BufferedIOBase, BytesIO
from typing import BinaryIO, Dict, Optional, Tuple, Union
import numpy as np
from PIL import Image
# Constants
SEGMENT_SEP = b"\xff"
APP1_MARKER = b"\xe1"
MAGIC_FLIR_DEF = b"FLIR\x00"
CHUNK_APP1_BYTES_COUNT = len(APP1_MARKER)
CHUNK_LENGTH_BYTES_COUNT = 2
CHUNK_MAGIC_BYTES_COUNT = len(MAGIC_FLIR_DEF)
CHUNK_SKIP_BYTES_COUNT = 1
CHUNK_NUM_BYTES_COUNT = 1
CHUNK_TOT_BYTES_COUNT = 1
CHUNK_PARTIAL_METADATA_LENGTH = CHUNK_APP1_BYTES_COUNT + CHUNK_LENGTH_BYTES_COUNT + CHUNK_MAGIC_BYTES_COUNT
CHUNK_METADATA_LENGTH = (
CHUNK_PARTIAL_METADATA_LENGTH + CHUNK_SKIP_BYTES_COUNT + CHUNK_NUM_BYTES_COUNT + CHUNK_TOT_BYTES_COUNT
)
def unpack(path_or_stream: Union[str, BinaryIO]) -> np.ndarray:
"""Unpacks the FLIR image, meaning that it will return the thermal data embedded in the image.
Parameters
----------
path_or_stream : Union[str, BinaryIO]
Either a path (string) to a FLIR file, or a byte stream such as
BytesIO or file opened as `open(file_path, "rb")`.
Returns
-------
FlyrThermogram
When successful, a FlyrThermogram object containing thermogram data.
"""
if isinstance(path_or_stream, str) and os.path.isfile(path_or_stream):
with open(path_or_stream, "rb") as flirh:
return unpack(flirh)
elif isinstance(path_or_stream, BufferedIOBase):
stream = path_or_stream
flir_app1_stream = extract_flir_app1(stream)
flir_records = parse_flir_app1(flir_app1_stream)
raw_np = parse_thermal(flir_app1_stream, flir_records)
return raw_np
else:
raise ValueError("Incorrect input")
def extract_flir_app1(stream: BinaryIO) -> BinaryIO:
"""Extracts the FLIR APP1 bytes.
Parameters
---------
stream : BinaryIO
A full bytes stream of a JPEG file, expected to be a FLIR file.
Raises
------
ValueError
When the file is invalid in one the next ways, a
ValueError is thrown.
* File is not a JPEG
* A FLIR chunk number occurs more than once
* The total chunks count is inconsistent over multiple chunks
* No APP1 segments are successfully parsed
Returns
-------
BinaryIO
A bytes stream of the APP1 FLIR segments
"""
# Check JPEG-ness
_ = stream.read(2)
chunks_count: Optional[int] = None
chunks: Dict[int, bytes] = {}
while True:
b = stream.read(1)
if b == b"":
break
if b != SEGMENT_SEP:
continue
parsed_chunk = parse_flir_chunk(stream, chunks_count)
if not parsed_chunk:
continue
chunks_count, chunk_num, chunk = parsed_chunk
chunk_exists = chunks.get(chunk_num, None) is not None
if chunk_exists:
raise ValueError("Invalid FLIR: duplicate chunk number")
chunks[chunk_num] = chunk
# Encountered all chunks, break out of loop to process found metadata
if chunk_num == chunks_count:
break
if chunks_count is None:
raise ValueError("Invalid FLIR: no metadata encountered")
flir_app1_bytes = b""
for chunk_num in range(chunks_count + 1):
flir_app1_bytes += chunks[chunk_num]
flir_app1_stream = BytesIO(flir_app1_bytes)
flir_app1_stream.seek(0)
return flir_app1_stream
def parse_flir_chunk(stream: BinaryIO, chunks_count: Optional[int]) -> Optional[Tuple[int, int, bytes]]:
"""Parse flir chunk."""
# Parse the chunk header. Headers are as follows (definition with example):
#
# \xff\xe1<length: 2 bytes>FLIR\x00\x01<chunk nr: 1 byte><chunk count: 1 byte>
# \xff\xe1\xff\xfeFLIR\x00\x01\x01\x0b
#
# Meaning: Exif APP1, 65534 long, FLIR chunk 1 out of 12
marker = stream.read(CHUNK_APP1_BYTES_COUNT)
length_bytes = stream.read(CHUNK_LENGTH_BYTES_COUNT)
length = int.from_bytes(length_bytes, "big")
length -= CHUNK_METADATA_LENGTH
magic_flir = stream.read(CHUNK_MAGIC_BYTES_COUNT)
if not (marker == APP1_MARKER and magic_flir == MAGIC_FLIR_DEF):
# Seek back to just after byte b and continue searching for chunks
stream.seek(-len(marker) - len(length_bytes) - len(magic_flir), 1)
return None
stream.seek(1, 1) # skip 1 byte, unsure what it is for
chunk_num = int.from_bytes(stream.read(CHUNK_NUM_BYTES_COUNT), "big")
chunks_tot = int.from_bytes(stream.read(CHUNK_TOT_BYTES_COUNT), "big")
# Remember total chunks to verify metadata consistency
if chunks_count is None:
chunks_count = chunks_tot
if ( # Check whether chunk metadata is consistent
chunks_tot is None or chunk_num < 0 or chunk_num > chunks_tot or chunks_tot != chunks_count
):
raise ValueError(f"Invalid FLIR: inconsistent total chunks, should be 0 or greater, but is {chunks_tot}")
return chunks_tot, chunk_num, stream.read(length + 1)
def parse_thermal(stream: BinaryIO, records: Dict[int, Tuple[int, int, int, int]]) -> np.ndarray:
"""Parse thermal."""
RECORD_IDX_RAW_DATA = 1
raw_data_md = records[RECORD_IDX_RAW_DATA]
_, _, raw_data = parse_raw_data(stream, raw_data_md)
return raw_data
def parse_flir_app1(stream: BinaryIO) -> Dict[int, Tuple[int, int, int, int]]:
"""Parse flir app1."""
# 0x00 - string[4] file format ID = "FFF\0"
# 0x04 - string[16] file creator: seen "\0","MTX IR\0","CAMCTRL\0"
# 0x14 - int32u file format version = 100
# 0x18 - int32u offset to record directory
# 0x1c - int32u number of entries in record directory
# 0x20 - int32u next free index ID = 2
# 0x24 - int16u swap pattern = 0 (?)
# 0x28 - int16u[7] spares
# 0x34 - int32u[2] reserved
# 0x3c - int32u checksum
# 1. Read 0x40 bytes and verify that its contents equals AFF\0 or FFF\0
_ = stream.read(4)
# 2. Read FLIR record directory metadata (ref 3)
stream.seek(16, 1)
_ = int.from_bytes(stream.read(4), "big")
record_dir_offset = int.from_bytes(stream.read(4), "big")
record_dir_entries_count = int.from_bytes(stream.read(4), "big")
stream.seek(28, 1)
_ = int.from_bytes(stream.read(4), "big")
# 3. Read record directory (which is a FLIR record entry repeated
# `record_dir_entries_count` times)
stream.seek(record_dir_offset)
record_dir_stream = BytesIO(stream.read(32 * record_dir_entries_count))
# First parse the record metadata
record_details: Dict[int, Tuple[int, int, int, int]] = {}
for record_nr in range(record_dir_entries_count):
record_dir_stream.seek(0)
details = parse_flir_record_metadata(stream, record_nr)
if details:
record_details[details[1]] = details
# Then parse the actual records
# for (entry_idx, type, offset, length) in record_details:
# parse_record = record_parsers[type]
# stream.seek(offset)
# record = BytesIO(stream.read(length + 36)) # + 36 needed to find end
# parse_record(record, offset, length)
return record_details
def parse_flir_record_metadata(stream: BinaryIO, record_nr: int) -> Optional[Tuple[int, int, int, int]]:
"""Parse flir record metadata."""
# FLIR record entry (ref 3):
# 0x00 - int16u record type
# 0x02 - int16u record subtype: RawData 1=BE, 2=LE, 3=PNG; 1 for other record types
# 0x04 - int32u record version: seen 0x64,0x66,0x67,0x68,0x6f,0x104
# 0x08 - int32u index id = 1
# 0x0c - int32u record offset from start of FLIR data
# 0x10 - int32u record length
# 0x14 - int32u parent = 0 (?)
# 0x18 - int32u object number = 0 (?)
# 0x1c - int32u checksum: 0 for no checksum
entry = 32 * record_nr
stream.seek(entry)
record_type = int.from_bytes(stream.read(2), "big")
if record_type < 1:
return None
_ = int.from_bytes(stream.read(2), "big")
_ = int.from_bytes(stream.read(4), "big")
_ = int.from_bytes(stream.read(4), "big")
record_offset = int.from_bytes(stream.read(4), "big")
record_length = int.from_bytes(stream.read(4), "big")
_ = int.from_bytes(stream.read(4), "big")
_ = int.from_bytes(stream.read(4), "big")
_ = int.from_bytes(stream.read(4), "big")
return (entry, record_type, record_offset, record_length)
def parse_raw_data(stream: BinaryIO, metadata: Tuple[int, int, int, int]):
"""Parse raw data."""
(_, _, offset, length) = metadata
stream.seek(offset)
stream.seek(2, 1)
width = int.from_bytes(stream.read(2), "little")
height = int.from_bytes(stream.read(2), "little")
stream.seek(offset + 32)
# Read the bytes with the raw thermal data and decode using PIL
thermal_bytes = stream.read(length)
thermal_stream = BytesIO(thermal_bytes)
thermal_img = Image.open(thermal_stream)
thermal_np = np.array(thermal_img)
# Check shape
if thermal_np.shape != (height, width):
msg = "Invalid FLIR: metadata's width and height don't match thermal data's actual width\
and height ({} vs ({}, {})"
msg = msg.format(thermal_np.shape, height, width)
raise ValueError(msg)
# FLIR PNG data is in the wrong byte order, fix that
fix_byte_order = np.vectorize(lambda x: (x >> 8) + ((x & 0x00FF) << 8))
thermal_np = fix_byte_order(thermal_np)
return width, height, thermal_np

View file

@ -1,16 +1,25 @@
import os
import sys
import math
from opendm import log
from opendm import system
from opendm import io
def generate_tiles(geotiff, output_dir, max_concurrency):
gdal2tiles = os.path.join(os.path.dirname(__file__), "gdal2tiles.py")
system.run('%s "%s" --processes %s -z 5-21 -n -w none "%s" "%s"' % (sys.executable, gdal2tiles, max_concurrency, geotiff, output_dir))
def generate_tiles(geotiff, output_dir, max_concurrency, resolution):
circumference_earth_cm = 2*math.pi*637_813_700
px_per_tile = 256
resolution_equator_cm = circumference_earth_cm/px_per_tile
zoom = math.ceil(math.log(resolution_equator_cm/resolution, 2))
def generate_orthophoto_tiles(geotiff, output_dir, max_concurrency):
min_zoom = 5 # 4.89 km/px
max_zoom = min(zoom, 23) # No deeper zoom than 23 (1.86 cm/px at equator)
gdal2tiles = os.path.join(os.path.dirname(__file__), "gdal2tiles.py")
system.run('%s "%s" --processes %s -z %s-%s -n -w none "%s" "%s"' % (sys.executable, gdal2tiles, max_concurrency, min_zoom, max_zoom, geotiff, output_dir))
def generate_orthophoto_tiles(geotiff, output_dir, max_concurrency, resolution):
try:
generate_tiles(geotiff, output_dir, max_concurrency)
generate_tiles(geotiff, output_dir, max_concurrency, resolution)
except Exception as e:
log.ODM_WARNING("Cannot generate orthophoto tiles: %s" % str(e))
@ -37,10 +46,10 @@ def generate_colored_hillshade(geotiff):
log.ODM_WARNING("Cannot generate colored hillshade: %s" % str(e))
return (None, None, None)
def generate_dem_tiles(geotiff, output_dir, max_concurrency):
def generate_dem_tiles(geotiff, output_dir, max_concurrency, resolution):
try:
colored_dem, hillshade_dem, colored_hillshade_dem = generate_colored_hillshade(geotiff)
generate_tiles(colored_hillshade_dem, output_dir, max_concurrency)
generate_tiles(colored_hillshade_dem, output_dir, max_concurrency, resolution)
# Cleanup
for f in [colored_dem, hillshade_dem, colored_hillshade_dem]:

View file

@ -13,6 +13,7 @@ from opendm import log
from opendm import io
from opendm import system
from opendm import context
from opendm import multispectral
from opendm.progress import progressbc
from opendm.photo import ODM_Photo
@ -27,7 +28,7 @@ class ODM_Reconstruction(object):
self.gcp = None
self.multi_camera = self.detect_multi_camera()
self.filter_photos()
def detect_multi_camera(self):
"""
Looks at the reconstruction photos and determines if this
@ -45,22 +46,88 @@ class ODM_Reconstruction(object):
band_photos[p.band_name].append(p)
bands_count = len(band_photos)
if bands_count >= 2 and bands_count <= 8:
# Band name with the maximum number of photos
max_band_name = None
max_photos = -1
for band_name in band_photos:
if len(band_photos[band_name]) > max_photos:
max_band_name = band_name
max_photos = len(band_photos[band_name])
if bands_count >= 2 and bands_count <= 10:
# Validate that all bands have the same number of images,
# otherwise this is not a multi-camera setup
img_per_band = len(band_photos[p.band_name])
for band in band_photos:
if len(band_photos[band]) != img_per_band:
log.ODM_ERROR("Multi-camera setup detected, but band \"%s\" (identified from \"%s\") has only %s images (instead of %s), perhaps images are missing or are corrupted. Please include all necessary files to process all bands and try again." % (band, band_photos[band][0].filename, len(band_photos[band]), img_per_band))
raise RuntimeError("Invalid multi-camera images")
img_per_band = len(band_photos[max_band_name])
mc = []
for band_name in band_indexes:
mc.append({'name': band_name, 'photos': band_photos[band_name]})
# Sort by band index
mc.sort(key=lambda x: band_indexes[x['name']])
filter_missing = False
for band in band_photos:
if len(band_photos[band]) < img_per_band:
log.ODM_WARNING("Multi-camera setup detected, but band \"%s\" (identified from \"%s\") has only %s images (instead of %s), perhaps images are missing or are corrupted." % (band, band_photos[band][0].filename, len(band_photos[band]), len(band_photos[max_band_name])))
filter_missing = True
if filter_missing:
# Calculate files to ignore
_, p2s = multispectral.compute_band_maps(mc, max_band_name)
max_files_per_band = 0
for filename in p2s:
max_files_per_band = max(max_files_per_band, len(p2s[filename]))
for filename in p2s:
if len(p2s[filename]) < max_files_per_band:
photos_to_remove = p2s[filename] + [p for p in self.photos if p.filename == filename]
for photo in photos_to_remove:
log.ODM_WARNING("Excluding %s" % photo.filename)
self.photos = [p for p in self.photos if p != photo]
for i in range(len(mc)):
mc[i]['photos'] = [p for p in mc[i]['photos'] if p != photo]
log.ODM_INFO("New image count: %s" % len(self.photos))
# We enforce a normalized band order for all bands that we can identify
# and rely on the manufacturer's band_indexes as a fallback for all others
normalized_band_order = {
'RGB': '0',
'REDGREENBLUE': '0',
'RED': '1',
'R': '1',
'GREEN': '2',
'G': '2',
'BLUE': '3',
'B': '3',
'NIR': '4',
'N': '4',
'REDEDGE': '5',
'RE': '5',
'PANCHRO': '6',
'LWIR': '7',
'L': '7',
}
for band_name in band_indexes:
if band_name.upper() not in normalized_band_order:
log.ODM_WARNING(f"Cannot identify order for {band_name} band, using manufacturer suggested index instead")
# Sort
mc.sort(key=lambda x: normalized_band_order.get(x['name'].upper(), '9' + band_indexes[x['name']]))
for c, d in enumerate(mc):
log.ODM_INFO(f"Band {c + 1}: {d['name']}")
return mc
return None
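A minimal sketch of the ordering logic above, with hypothetical band data: bands with a known normalized position sort by it, while unknown bands fall back to '9' plus the manufacturer index, placing them after all recognized bands.

normalized_band_order = {'RGB': '0', 'RED': '1', 'GREEN': '2', 'BLUE': '3', 'NIR': '4', 'REDEDGE': '5'}
band_indexes = {'Red': '0', 'Green': '1', 'NIR': '2', 'Thermal': '3'}  # hypothetical manufacturer order
mc = [{'name': name} for name in band_indexes]

# 'Thermal' is not in the normalized table, so its key becomes '93' and it
# sorts after every recognized band
mc.sort(key=lambda x: normalized_band_order.get(x['name'].upper(), '9' + band_indexes[x['name']]))
print([b['name'] for b in mc])  # -> ['Red', 'Green', 'NIR', 'Thermal']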
@@ -82,6 +149,12 @@ class ODM_Reconstruction(object):
if 'rgb' in bands or 'redgreenblue' in bands:
if 'red' in bands and 'green' in bands and 'blue' in bands:
bands_to_remove.append(bands['rgb'] if 'rgb' in bands else bands['redgreenblue'])
# Mavic 3M's RGB camera lenses are too different from the multispectral ones
# so we drop the RGB channel instead
elif self.photos[0].is_make_model("DJI", "M3M") and 'red' in bands and 'green' in bands:
bands_to_remove.append(bands['rgb'] if 'rgb' in bands else bands['redgreenblue'])
else:
for b in ['red', 'green', 'blue']:
if b in bands:

View file

@@ -4,7 +4,7 @@ import json
from opendm import log
from opendm.photo import find_largest_photo_dims
from osgeo import gdal
from opendm.loghelpers import double_quote
from opendm.arghelpers import double_quote
class NumpyEncoder(json.JSONEncoder):
def default(self, obj):

View file

@@ -54,8 +54,10 @@ class SrtFileParser:
if not self.gps_data:
for d in self.data:
lat, lon, alt = d.get('latitude'), d.get('longitude'), d.get('altitude')
if alt is None:
alt = 0
tm = d.get('start')
if lat is not None and lon is not None:
if self.ll_to_utm is None:
self.ll_to_utm, self.utm_to_ll = location.utm_transformers_from_ll(lon, lat)
@@ -122,6 +124,25 @@ class SrtFileParser:
# 00:00:00,000 --> 00:00:01,000
# F/2.8, SS 206.14, ISO 150, EV 0, GPS (-82.6669, 27.7716, 10), D 2.80m, H 0.00m, H.S 0.00m/s, V.S 0.00m/s
# DJI Phantom4 RTK
# 36
# 00:00:35,000 --> 00:00:36,000
# F/6.3, SS 60, ISO 100, EV 0, RTK (120.083799, 30.213635, 28), HOME (120.084146, 30.214243, 103.55m), D 75.36m, H 76.19m, H.S 0.30m/s, V.S 0.00m/s, F.PRY (-5.3°, 2.1°, 28.3°), G.PRY (-40.0°, 0.0°, 28.2°)
# DJI Unknown Model #1
# 1
# 00:00:00,000 --> 00:00:00,033
# <font size="28">SrtCnt : 1, DiffTime : 33ms
# 2024-01-18 10:23:26.397
# [iso : 150] [shutter : 1/5000.0] [fnum : 170] [ev : 0] [ct : 5023] [color_md : default] [focal_len : 240] [dzoom_ratio: 10000, delta:0],[latitude: -22.724555] [longitude: -47.602414] [rel_alt: 0.300 abs_alt: 549.679] </font>
# DJI Mavic 2 Zoom
# 1
# 00:00:00,000 --> 00:00:00,041
# <font size="36">FrameCnt : 1, DiffTime : 41ms
# 2023-07-15 11:55:16,320,933
# [iso : 100] [shutter : 1/400.0] [fnum : 280] [ev : 0] [ct : 5818] [color_md : default] [focal_len : 240] [latitude : 0.000000] [longtitude : 0.000000] [altitude: 0.000000] </font>
with open(self.filename, 'r') as f:
iso = None
@@ -192,15 +213,21 @@ class SrtFileParser:
latitude = match_single([
("latitude: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("latitude : ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("GPS \([\d\.\-]+,? ([\d\.\-]+),? [\d\.\-]+\)", lambda v: float(v) if v != 0 else None),
("RTK \([-+]?\d+\.\d+, (-?\d+\.\d+), -?\d+\)", lambda v: float(v) if v != 0 else None),
], line)
longitude = match_single([
("longitude: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("longtitude : ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("GPS \(([\d\.\-]+),? [\d\.\-]+,? [\d\.\-]+\)", lambda v: float(v) if v != 0 else None),
("RTK \((-?\d+\.\d+), [-+]?\d+\.\d+, -?\d+\)", lambda v: float(v) if v != 0 else None),
], line)
altitude = match_single([
("altitude: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("GPS \([\d\.\-]+,? [\d\.\-]+,? ([\d\.\-]+)\)", lambda v: float(v) if v != 0 else None),
("RTK \([-+]?\d+\.\d+, [-+]?\d+\.\d+, (-?\d+)\)", lambda v: float(v) if v != 0 else None),
("abs_alt: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
], line)
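These patterns are tried in order by match_single, ODM's first-match helper; a self-contained approximation, run against the "DJI Unknown Model #1" sample above:

import re

def match_single(patterns, line):
    # Simplified stand-in for ODM's helper: first pattern that matches wins
    for pattern, conv in patterns:
        m = re.search(pattern, line)
        if m:
            return conv(m.group(1))
    return None

line = "[latitude: -22.724555] [longitude: -47.602414] [rel_alt: 0.300 abs_alt: 549.679]"
print(match_single([(r"latitude: ([\d\.\-]+)", float)], line))  # -> -22.724555
print(match_single([(r"abs_alt: ([\d\.\-]+)", float)], line))   # -> 549.679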

run.py (29 changes)
View file

@@ -13,7 +13,7 @@ from opendm import system
from opendm import io
from opendm.progress import progressbc
from opendm.utils import get_processing_results_paths, rm_r
from opendm.loghelpers import args_to_dict
from opendm.arghelpers import args_to_dict, save_opts, compare_args, find_rerun_stage
from stages.odm_app import ODMApp
@@ -29,20 +29,26 @@ if __name__ == '__main__':
log.ODM_INFO('Initializing ODM %s - %s' % (odm_version(), system.now()))
progressbc.set_project_name(args.name)
args.project_path = os.path.join(args.project_path, args.name)
if not io.dir_exists(args.project_path):
log.ODM_ERROR('Directory %s does not exist.' % args.name)
exit(1)
opts_json = os.path.join(args.project_path, "options.json")
auto_rerun_stage, opts_diff = find_rerun_stage(opts_json, args, config.rerun_stages, config.processopts)
if auto_rerun_stage is not None and len(auto_rerun_stage) > 0:
log.ODM_INFO("Rerunning from: %s" % auto_rerun_stage[0])
args.rerun_from = auto_rerun_stage
# Print args
args_dict = args_to_dict(args)
log.ODM_INFO('==============')
for k in args_dict.keys():
log.ODM_INFO('%s: %s' % (k, args_dict[k]))
log.ODM_INFO('%s: %s%s' % (k, args_dict[k], ' [changed]' if k in opts_diff else ''))
log.ODM_INFO('==============')
progressbc.set_project_name(args.name)
# Add project dir if doesn't exist
args.project_path = os.path.join(args.project_path, args.name)
if not io.dir_exists(args.project_path):
log.ODM_WARNING('Directory %s does not exist. Creating it now.' % args.name)
system.mkdir_p(os.path.abspath(args.project_path))
# If user asks to rerun everything, delete all of the existing progress directories.
if args.rerun_all:
@@ -57,6 +63,9 @@ if __name__ == '__main__':
app = ODMApp(args)
retcode = app.execute()
if retcode == 0:
save_opts(opts_json, args)
# Do not show ASCII art for local submodels runs
if retcode == 0 and not "submodels" in args.project_path:
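save_opts records the options used by a successful run; on the next invocation find_rerun_stage diffs them against the current arguments to pick the earliest stage that must re-execute. A sketch of the comparison half, with hypothetical helper names:

import json, os

def load_saved_opts(path):
    # Hypothetical reader for the options.json written by save_opts()
    if not os.path.isfile(path):
        return None
    with open(path) as f:
        return json.load(f)

def changed_opts(saved, current):
    # Option keys whose values differ from the previous run
    if saved is None:
        return set()
    return {k for k in current if saved.get(k) != current.get(k)}

# find_rerun_stage would then map each changed option to the earliest
# pipeline stage it affects and return that stage's name.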

View file

@@ -135,7 +135,7 @@ class ODMLoadDatasetStage(types.ODM_Stage):
"input": video_files,
"output": images_dir,
"blur_threshold": 300,
"blur_threshold": 200,
"distance_threshold": 10,
"black_ratio_threshold": 0.98,
"pixel_black_threshold": 0.30,

View file

@@ -81,14 +81,11 @@ class ODMMvsTexStage(types.ODM_Stage):
# Format arguments to fit Mvs-Texturing app
skipGlobalSeamLeveling = ""
skipLocalSeamLeveling = ""
keepUnseenFaces = ""
nadir = ""
if args.texturing_skip_global_seam_leveling:
skipGlobalSeamLeveling = "--skip_global_seam_leveling"
if args.texturing_skip_local_seam_leveling:
skipLocalSeamLeveling = "--skip_local_seam_leveling"
if args.texturing_keep_unseen_faces:
keepUnseenFaces = "--keep_unseen_faces"
if (r['nadir']):
@@ -102,7 +99,6 @@ class ODMMvsTexStage(types.ODM_Stage):
'dataTerm': 'gmi',
'outlierRemovalType': 'gauss_clamping',
'skipGlobalSeamLeveling': skipGlobalSeamLeveling,
'skipLocalSeamLeveling': skipLocalSeamLeveling,
'keepUnseenFaces': keepUnseenFaces,
'toneMapping': 'none',
'nadirMode': nadir,
@@ -114,7 +110,7 @@ class ODMMvsTexStage(types.ODM_Stage):
mvs_tmp_dir = os.path.join(r['out_dir'], 'tmp')
# Make sure tmp directory is empty
# mvstex creates a tmp directory, so make sure it is empty
if io.dir_exists(mvs_tmp_dir):
log.ODM_INFO("Removing old tmp directory {}".format(mvs_tmp_dir))
shutil.rmtree(mvs_tmp_dir)
@@ -125,7 +121,6 @@ class ODMMvsTexStage(types.ODM_Stage):
'-t {toneMapping} '
'{intermediate} '
'{skipGlobalSeamLeveling} '
'{skipLocalSeamLeveling} '
'{keepUnseenFaces} '
'{nadirMode} '
'{labelingFile} '

View file

@@ -27,6 +27,7 @@ class ODMApp:
Initializes the application and defines the ODM application pipeline stages
"""
json_log_paths = [os.path.join(args.project_path, "log.json")]
if args.copy_to:
json_log_paths.append(args.copy_to)

View file

@@ -12,7 +12,6 @@ from opendm.cropper import Cropper
from opendm import pseudogeo
from opendm.tiles.tiler import generate_dem_tiles
from opendm.cogeo import convert_to_cogeo
from opendm.opc import classify
class ODMDEMStage(types.ODM_Stage):
def process(self, args, outputs):
@@ -35,7 +34,6 @@ class ODMDEMStage(types.ODM_Stage):
ignore_resolution=ignore_resolution and args.ignore_gsd,
has_gcp=reconstruction.has_gcp())
log.ODM_INFO('Classify: ' + str(args.pc_classify))
log.ODM_INFO('Create DSM: ' + str(args.dsm))
log.ODM_INFO('Create DTM: ' + str(args.dtm))
log.ODM_INFO('DEM input file {0} found: {1}'.format(dem_input, str(pc_model_found)))
@@ -45,34 +43,9 @@ class ODMDEMStage(types.ODM_Stage):
if not io.dir_exists(odm_dem_root):
system.mkdir_p(odm_dem_root)
if args.pc_classify and pc_model_found:
pc_classify_marker = os.path.join(odm_dem_root, 'pc_classify_done.txt')
if not io.file_exists(pc_classify_marker) or self.rerun():
log.ODM_INFO("Classifying {} using Simple Morphological Filter (1/2)".format(dem_input))
commands.classify(dem_input,
args.smrf_scalar,
args.smrf_slope,
args.smrf_threshold,
args.smrf_window
)
log.ODM_INFO("Classifying {} using OpenPointClass (2/2)".format(dem_input))
classify(dem_input, args.max_concurrency)
with open(pc_classify_marker, 'w') as f:
f.write('Classify: smrf\n')
f.write('Scalar: {}\n'.format(args.smrf_scalar))
f.write('Slope: {}\n'.format(args.smrf_slope))
f.write('Threshold: {}\n'.format(args.smrf_threshold))
f.write('Window: {}\n'.format(args.smrf_window))
progress = 20
self.update_progress(progress)
if args.pc_rectify:
commands.rectify(dem_input)
# Do we need to process anything here?
if (args.dsm or args.dtm) and pc_model_found:
dsm_output_filename = os.path.join(odm_dem_root, 'dsm.tif')
@@ -100,7 +73,8 @@ class ODMDEMStage(types.ODM_Stage):
resolution=resolution / 100.0,
decimation=args.dem_decimation,
max_workers=args.max_concurrency,
keep_unfilled_copy=args.dem_euclidean_map
with_euclidean_map=args.dem_euclidean_map,
max_tiles=None if reconstruction.has_geotagged_photos() else math.ceil(len(reconstruction.photos) / 2)
)
dem_geotiff_path = os.path.join(odm_dem_root, "{}.tif".format(product))
@@ -110,27 +84,16 @@ class ODMDEMStage(types.ODM_Stage):
# Crop DEM
Cropper.crop(bounds_file_path, dem_geotiff_path, utils.get_dem_vars(args), keep_original=not args.optimize_disk_space)
if args.dem_euclidean_map:
unfilled_dem_path = io.related_file_path(dem_geotiff_path, postfix=".unfilled")
if args.crop > 0 or args.boundary:
# Crop unfilled DEM
Cropper.crop(bounds_file_path, unfilled_dem_path, utils.get_dem_vars(args), keep_original=not args.optimize_disk_space)
commands.compute_euclidean_map(unfilled_dem_path,
io.related_file_path(dem_geotiff_path, postfix=".euclideand"),
overwrite=True)
if pseudo_georeference:
pseudogeo.add_pseudo_georeferencing(dem_geotiff_path)
if args.tiles:
generate_dem_tiles(dem_geotiff_path, tree.path("%s_tiles" % product), args.max_concurrency)
generate_dem_tiles(dem_geotiff_path, tree.path("%s_tiles" % product), args.max_concurrency, resolution)
if args.cog:
convert_to_cogeo(dem_geotiff_path, max_workers=args.max_concurrency)
progress += 30
progress += 40
self.update_progress(progress)
else:
log.ODM_WARNING('Found existing outputs in: %s' % odm_dem_root)

View file

@@ -36,7 +36,7 @@ class ODMFilterPoints(types.ODM_Stage):
else:
avg_gsd = gsd.opensfm_reconstruction_average_gsd(tree.opensfm_reconstruction)
if avg_gsd is not None:
boundary_distance = avg_gsd * 20 # 20 is arbitrary
boundary_distance = avg_gsd * 100 # 100 is arbitrary
if boundary_distance is not None:
outputs['boundary'] = compute_boundary_from_shots(tree.opensfm_reconstruction, boundary_distance, reconstruction.get_proj_offset())
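In concrete terms, the buffer around the camera shots grows five-fold:

avg_gsd = 0.05          # 5 cm ground sampling distance
print(avg_gsd * 20)     # old boundary distance: 1.0 m
print(avg_gsd * 100)    # new boundary distance: 5.0 m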

View file

@@ -5,6 +5,7 @@ import pipes
import fiona
import fiona.crs
import json
import zipfile
from collections import OrderedDict
from pyproj import CRS
@@ -32,6 +33,7 @@ class ODMGeoreferencingStage(types.ODM_Stage):
gcp_export_file = tree.path("odm_georeferencing", "ground_control_points.gpkg")
gcp_gml_export_file = tree.path("odm_georeferencing", "ground_control_points.gml")
gcp_geojson_export_file = tree.path("odm_georeferencing", "ground_control_points.geojson")
gcp_geojson_zip_export_file = tree.path("odm_georeferencing", "ground_control_points.zip")
unaligned_model = io.related_file_path(tree.odm_georeferencing_model_laz, postfix="_unaligned")
if os.path.isfile(unaligned_model) and self.rerun():
os.unlink(unaligned_model)
@@ -54,7 +56,7 @@ class ODMGeoreferencingStage(types.ODM_Stage):
}
# Write GeoPackage
with fiona.open(gcp_export_file, 'w', driver="GPKG",
crs=fiona.crs.from_string(reconstruction.georef.proj4()),
schema=gcp_schema) as f:
for gcp in gcps:
@@ -72,13 +74,13 @@ class ODMGeoreferencingStage(types.ODM_Stage):
('error_z', gcp['error'][2]),
])
})
# Write GML
try:
system.run('ogr2ogr -of GML "{}" "{}"'.format(gcp_gml_export_file, gcp_export_file))
except Exception as e:
log.ODM_WARNING("Cannot generate ground control points GML file: %s" % str(e))
# Write GeoJSON
geojson = {
'type': 'FeatureCollection',
@@ -101,42 +103,48 @@ class ODMGeoreferencingStage(types.ODM_Stage):
},
'properties': properties
})
with open(gcp_geojson_export_file, 'w') as f:
f.write(json.dumps(geojson, indent=4))
with zipfile.ZipFile(gcp_geojson_zip_export_file, 'w', compression=zipfile.ZIP_LZMA) as f:
f.write(gcp_geojson_export_file, arcname=os.path.basename(gcp_geojson_export_file))
else:
log.ODM_WARNING("GCPs could not be loaded for writing to %s" % gcp_export_file)
if not io.file_exists(tree.odm_georeferencing_model_laz) or self.rerun():
cmd = ('pdal translate -i "%s" -o \"%s\"' % (tree.filtered_point_cloud, tree.odm_georeferencing_model_laz))
cmd = f'pdal translate -i "{tree.filtered_point_cloud}" -o \"{tree.odm_georeferencing_model_laz}\"'
stages = ["ferry"]
params = [
'--filters.ferry.dimensions="views => UserData"',
'--writers.las.compression="lazip"',
'--filters.ferry.dimensions="views => UserData"'
]
if reconstruction.is_georeferenced():
log.ODM_INFO("Georeferencing point cloud")
stages.append("transformation")
utmoffset = reconstruction.georef.utm_offset()
params += [
'--filters.transformation.matrix="1 0 0 %s 0 1 0 %s 0 0 1 0 0 0 0 1"' % reconstruction.georef.utm_offset(),
'--writers.las.offset_x=%s' % reconstruction.georef.utm_east_offset,
'--writers.las.offset_y=%s' % reconstruction.georef.utm_north_offset,
f'--filters.transformation.matrix="1 0 0 {utmoffset[0]} 0 1 0 {utmoffset[1]} 0 0 1 0 0 0 0 1"',
f'--writers.las.offset_x={reconstruction.georef.utm_east_offset}' ,
f'--writers.las.offset_y={reconstruction.georef.utm_north_offset}',
'--writers.las.scale_x=0.001',
'--writers.las.scale_y=0.001',
'--writers.las.scale_z=0.001',
'--writers.las.offset_z=0',
'--writers.las.a_srs="%s"' % reconstruction.georef.proj4()
f'--writers.las.a_srs="{reconstruction.georef.proj4()}"' # HOBU this should maybe be WKT
]
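# The 16 values passed to --filters.transformation.matrix above form a
# row-major 4x4 affine; with utmoffset = (E, N) it translates x by E and
# y by N while leaving z untouched:
#
#   1 0 0 E
#   0 1 0 N
#   0 0 1 0
#   0 0 0 1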
if reconstruction.has_gcp() and io.file_exists(gcp_gml_export_file):
log.ODM_INFO("Embedding GCP info in point cloud")
params += [
'--writers.las.vlrs="{\\\"filename\\\": \\\"%s\\\", \\\"user_id\\\": \\\"ODM_GCP\\\", \\\"description\\\": \\\"Ground Control Points (GML)\\\"}"' % gcp_gml_export_file.replace(os.sep, "/")
]
if reconstruction.has_gcp() and io.file_exists(gcp_geojson_zip_export_file):
if os.path.getsize(gcp_geojson_zip_export_file) <= 65535:
log.ODM_INFO("Embedding GCP info in point cloud")
params += [
'--writers.las.vlrs="{\\\"filename\\\": \\\"%s\\\", \\\"user_id\\\": \\\"ODM\\\", \\\"record_id\\\": 2, \\\"description\\\": \\\"Ground Control Points (zip)\\\"}"' % gcp_geojson_zip_export_file.replace(os.sep, "/")
]
else:
log.ODM_WARNING("Cannot embed GCP info in point cloud, %s is too large" % gcp_geojson_zip_export_file)
system.run(cmd + ' ' + ' '.join(stages) + ' ' + ' '.join(params))
self.update_progress(50)
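The 65535-byte guard reflects the LAS format: a VLR stores its payload length in an unsigned 16-bit field, so larger payloads simply cannot be embedded. A sketch of the check, assuming a zip written as above (file names hypothetical):

import os, zipfile

MAX_VLR_PAYLOAD = 65535  # LAS VLR record length is an unsigned 16-bit int

# LZMA compression keeps typical GCP GeoJSON files well under the limit
with zipfile.ZipFile("gcps.zip", "w", compression=zipfile.ZIP_LZMA) as z:
    z.write("ground_control_points.geojson")

if os.path.getsize("gcps.zip") <= MAX_VLR_PAYLOAD:
    print("small enough to embed as a VLR")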
@@ -144,27 +152,27 @@ class ODMGeoreferencingStage(types.ODM_Stage):
if args.crop > 0:
log.ODM_INFO("Calculating cropping area and generating bounds shapefile from point cloud")
cropper = Cropper(tree.odm_georeferencing, 'odm_georeferenced_model')
if args.fast_orthophoto:
decimation_step = 4
else:
decimation_step = 40
# More aggressive decimation for large datasets
if not args.fast_orthophoto:
decimation_step *= int(len(reconstruction.photos) / 1000) + 1
decimation_step = min(decimation_step, 95)
try:
cropper.create_bounds_gpkg(tree.odm_georeferencing_model_laz, args.crop,
decimation_step=decimation_step)
except:
log.ODM_WARNING("Cannot calculate crop bounds! We will skip cropping")
args.crop = 0
if 'boundary' in outputs and args.crop == 0:
log.ODM_INFO("Using boundary JSON as cropping area")
bounds_base, _ = os.path.splitext(tree.odm_georeferencing_model_laz)
bounds_json = bounds_base + ".bounds.geojson"
bounds_gpkg = bounds_base + ".bounds.gpkg"
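A sketch of the decimation arithmetic above: every extra 1,000 photos multiplies the base step, capped at 95.

def bounds_decimation_step(num_photos, fast_orthophoto=False):
    step = 4 if fast_orthophoto else 40
    if not fast_orthophoto:
        step *= int(num_photos / 1000) + 1  # more aggressive on large sets
    return min(step, 95)

print(bounds_decimation_step(2500))  # -> 95 (40 * 3, capped)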
@@ -207,8 +215,7 @@ class ODMGeoreferencingStage(types.ODM_Stage):
os.rename(unaligned_model, tree.odm_georeferencing_model_laz)
# Align textured models
for texturing in [tree.odm_texturing, tree.odm_25dtexturing]:
obj = os.path.join(texturing, "odm_textured_model_geo.obj")
def transform_textured_model(obj):
if os.path.isfile(obj):
unaligned_obj = io.related_file_path(obj, postfix="_unaligned")
if os.path.isfile(unaligned_obj):
@@ -220,7 +227,18 @@ class ODMGeoreferencingStage(types.ODM_Stage):
except Exception as e:
log.ODM_WARNING("Cannot transform textured model: %s" % str(e))
os.rename(unaligned_obj, obj)
for texturing in [tree.odm_texturing, tree.odm_25dtexturing]:
if reconstruction.multi_camera:
primary = get_primary_band_name(reconstruction.multi_camera, args.primary_band)
for band in reconstruction.multi_camera:
subdir = "" if band['name'] == primary else band['name'].lower()
obj = os.path.join(texturing, subdir, "odm_textured_model_geo.obj")
transform_textured_model(obj)
else:
obj = os.path.join(texturing, "odm_textured_model_geo.obj")
transform_textured_model(obj)
with open(tree.odm_georeferencing_alignment_matrix, "w") as f:
f.write(np_to_json(a_matrix))
else:
@@ -234,8 +252,8 @@ class ODMGeoreferencingStage(types.ODM_Stage):
else:
log.ODM_WARNING('Found a valid georeferenced model in: %s'
% tree.odm_georeferencing_model_laz)
if args.optimize_disk_space and io.file_exists(tree.odm_georeferencing_model_laz) and io.file_exists(tree.filtered_point_cloud):
os.remove(tree.filtered_point_cloud)

View file

@@ -59,7 +59,8 @@ class ODMeshingStage(types.ODM_Stage):
samples=self.params.get('samples'),
available_cores=args.max_concurrency,
method='poisson' if args.fast_orthophoto else 'gridded',
smooth_dsm=True)
smooth_dsm=True,
max_tiles=None if reconstruction.has_geotagged_photos() else math.ceil(len(reconstruction.photos) / 2))
else:
log.ODM_WARNING('Found a valid ODM 2.5D Mesh file in: %s' %
tree.odm_25dmesh)

View file

@@ -7,7 +7,8 @@ from opendm import context
from opendm import types
from opendm import gsd
from opendm import orthophoto
from opendm.concurrency import get_max_memory
from opendm.osfm import is_submodel
from opendm.concurrency import get_max_memory_mb
from opendm.cutline import compute_cutline
from opendm.utils import double_quote
from opendm import pseudogeo
@@ -28,10 +29,10 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
if not io.file_exists(tree.odm_orthophoto_tif) or self.rerun():
resolution = 1.0 / (gsd.cap_resolution(args.orthophoto_resolution, tree.opensfm_reconstruction,
ignore_gsd=args.ignore_gsd,
ignore_resolution=(not reconstruction.is_georeferenced()) and args.ignore_gsd,
has_gcp=reconstruction.has_gcp()) / 100.0)
resolution = gsd.cap_resolution(args.orthophoto_resolution, tree.opensfm_reconstruction,
ignore_gsd=args.ignore_gsd,
ignore_resolution=(not reconstruction.is_georeferenced()) and args.ignore_gsd,
has_gcp=reconstruction.has_gcp())
# odm_orthophoto definitions
kwargs = {
@@ -39,9 +40,14 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
'log': tree.odm_orthophoto_log,
'ortho': tree.odm_orthophoto_render,
'corners': tree.odm_orthophoto_corners,
'res': resolution,
'res': 1.0 / (resolution/100.0),
'bands': '',
'depth_idx': ''
'depth_idx': '',
'inpaint': '',
'utm_offsets': '',
'a_srs': '',
'vars': '',
'gdal_configs': '--config GDAL_CACHEMAX %s' % (get_max_memory_mb() * 1024 * 1024)
}
models = []
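cap_resolution returns a ground resolution in cm/px, so the renderer's res is now converted to pixels per meter at the call site, and GDAL_CACHEMAX is handed a byte count rather than a percentage:

resolution = 5.0                    # cm/px from cap_resolution
res = 1.0 / (resolution / 100.0)    # -> 20.0 px/m for odm_orthophoto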
@@ -79,59 +85,37 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
else:
models.append(os.path.join(base_dir, model_file))
# Perform edge inpainting on georeferenced RGB datasets
if reconstruction.is_georeferenced():
kwargs['inpaint'] = "-inpaintThreshold 1.0"
# Thermal dataset with single band
if reconstruction.photos[0].band_name.upper() == "LWIR":
kwargs['bands'] = '-bands lwir'
kwargs['models'] = ','.join(map(double_quote, models))
if reconstruction.is_georeferenced():
orthophoto_vars = orthophoto.get_orthophoto_vars(args)
kwargs['utm_offsets'] = "-utm_north_offset %s -utm_east_offset %s" % (reconstruction.georef.utm_north_offset, reconstruction.georef.utm_east_offset)
kwargs['a_srs'] = "-a_srs \"%s\"" % reconstruction.georef.proj4()
kwargs['vars'] = ' '.join(['-co %s=%s' % (k, orthophoto_vars[k]) for k in orthophoto_vars])
kwargs['ortho'] = tree.odm_orthophoto_tif # Render directly to final file
# run odm_orthophoto
log.ODM_INFO('Creating GeoTIFF')
system.run('"{odm_ortho_bin}" -inputFiles {models} '
'-logFile "{log}" -outputFile "{ortho}" -resolution {res} -verbose '
'-outputCornerFile "{corners}" {bands} {depth_idx}'.format(**kwargs))
'-outputCornerFile "{corners}" {bands} {depth_idx} {inpaint} '
'{utm_offsets} {a_srs} {vars} {gdal_configs} '.format(**kwargs), env_vars={'OMP_NUM_THREADS': args.max_concurrency})
# Create georeferenced GeoTiff
geotiffcreated = False
if reconstruction.is_georeferenced():
ulx = uly = lrx = lry = 0.0
with open(tree.odm_orthophoto_corners) as f:
for lineNumber, line in enumerate(f):
if lineNumber == 0:
tokens = line.split(' ')
if len(tokens) == 4:
ulx = float(tokens[0]) + \
float(reconstruction.georef.utm_east_offset)
lry = float(tokens[1]) + \
float(reconstruction.georef.utm_north_offset)
lrx = float(tokens[2]) + \
float(reconstruction.georef.utm_east_offset)
uly = float(tokens[3]) + \
float(reconstruction.georef.utm_north_offset)
log.ODM_INFO('Creating GeoTIFF')
orthophoto_vars = orthophoto.get_orthophoto_vars(args)
kwargs = {
'ulx': ulx,
'uly': uly,
'lrx': lrx,
'lry': lry,
'vars': ' '.join(['-co %s=%s' % (k, orthophoto_vars[k]) for k in orthophoto_vars]),
'proj': reconstruction.georef.proj4(),
'input': tree.odm_orthophoto_render,
'output': tree.odm_orthophoto_tif,
'log': tree.odm_orthophoto_tif_log,
'max_memory': get_max_memory(),
}
system.run('gdal_translate -a_ullr {ulx} {uly} {lrx} {lry} '
'{vars} '
'-a_srs \"{proj}\" '
'--config GDAL_CACHEMAX {max_memory}% '
'--config GDAL_TIFF_INTERNAL_MASK YES '
'"{input}" "{output}" > "{log}"'.format(**kwargs))
bounds_file_path = os.path.join(tree.odm_georeferencing, 'odm_georeferenced_model.bounds.gpkg')
# Cutline computation, before cropping
# We want to use the full orthophoto, not the cropped one.
submodel_run = is_submodel(tree.opensfm)
if args.orthophoto_cutline:
cutline_file = os.path.join(tree.odm_orthophoto, "cutline.gpkg")
@@ -140,22 +124,24 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
cutline_file,
args.max_concurrency,
scale=0.25)
if submodel_run:
orthophoto.compute_mask_raster(tree.odm_orthophoto_tif, cutline_file,
os.path.join(tree.odm_orthophoto, "odm_orthophoto_cut.tif"),
blend_distance=20, only_max_coords_feature=True)
else:
log.ODM_INFO("Not a submodel run, skipping mask raster generation")
orthophoto.compute_mask_raster(tree.odm_orthophoto_tif, cutline_file,
os.path.join(tree.odm_orthophoto, "odm_orthophoto_cut.tif"),
blend_distance=20, only_max_coords_feature=True)
orthophoto.post_orthophoto_steps(args, bounds_file_path, tree.odm_orthophoto_tif, tree.orthophoto_tiles)
orthophoto.post_orthophoto_steps(args, bounds_file_path, tree.odm_orthophoto_tif, tree.orthophoto_tiles, resolution)
# Generate feathered orthophoto also
if args.orthophoto_cutline:
if args.orthophoto_cutline and submodel_run:
orthophoto.feather_raster(tree.odm_orthophoto_tif,
os.path.join(tree.odm_orthophoto, "odm_orthophoto_feathered.tif"),
blend_distance=20
)
geotiffcreated = True
if not geotiffcreated:
else:
if io.file_exists(tree.odm_orthophoto_render):
pseudogeo.add_pseudo_georeferencing(tree.odm_orthophoto_render)
log.ODM_INFO("Renaming %s --> %s" % (tree.odm_orthophoto_render, tree.odm_orthophoto_tif))

View file

@@ -19,6 +19,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
reconstruction = outputs['reconstruction']
photos = reconstruction.photos
octx = OSFMContext(tree.opensfm)
pc_tile = False
if not photos:
raise system.ExitException('Not enough photos in photos array to start OpenMVS')
@@ -64,12 +65,13 @@ class ODMOpenMVSStage(types.ODM_Stage):
filter_point_th = -20
config = [
" --resolution-level %s" % int(resolution_level),
"--resolution-level %s" % int(resolution_level),
'--dense-config-file "%s"' % densify_ini_file,
"--max-resolution %s" % int(outputs['undist_image_max_size']),
"--max-threads %s" % args.max_concurrency,
"--number-views-fuse %s" % number_views_fuse,
"--sub-resolution-levels %s" % subres_levels,
"--archive-type 3",
'-w "%s"' % depthmaps_dir,
"-v 0"
]
@@ -77,14 +79,10 @@ class ODMOpenMVSStage(types.ODM_Stage):
gpu_config = []
use_gpu = has_gpu(args)
if use_gpu:
#gpu_config.append("--cuda-device -3")
gpu_config.append("--cuda-device -1")
else:
gpu_config.append("--cuda-device -2")
if args.pc_tile:
config.append("--fusion-mode 1")
extra_config = []
if args.pc_skip_geometric:
@@ -96,12 +94,13 @@ class ODMOpenMVSStage(types.ODM_Stage):
extra_config.append("--ignore-mask-label 0")
with open(densify_ini_file, 'w+') as f:
f.write("Optimize = 7\n")
f.write("Optimize = 7\nMin Views Filter = 1\n")
def run_densify():
system.run('"%s" "%s" %s' % (context.omvs_densify_path,
openmvs_scene_file,
' '.join(config + gpu_config + extra_config)))
try:
run_densify()
except system.SubprocessException as e:
@@ -111,9 +110,9 @@ class ODMOpenMVSStage(types.ODM_Stage):
log.ODM_WARNING("OpenMVS failed with GPU, is your graphics card driver up to date? Falling back to CPU.")
gpu_config = ["--cuda-device -2"]
run_densify()
elif (e.errorCode == 137 or e.errorCode == 3221226505) and not args.pc_tile:
elif (e.errorCode == 137 or e.errorCode == 143 or e.errorCode == 3221226505) and not pc_tile:
log.ODM_WARNING("OpenMVS ran out of memory, we're going to turn on tiling to see if we can process this.")
args.pc_tile = True
pc_tile = True
config.append("--fusion-mode 1")
run_densify()
else:
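Exit code 137 is SIGKILL (usually the Linux OOM killer), 143 is SIGTERM, and 3221226505 is the Windows status 0xC0000409; any of them now triggers a single retry with tiling enabled. The pattern, sketched with a hypothetical wrapper:

def densify_with_fallback(run, config):
    # run() raises system.SubprocessException on a non-zero exit code
    try:
        run(config)
    except system.SubprocessException as e:
        if e.errorCode in (137, 143, 3221226505):
            config.append("--fusion-mode 1")  # tile the scene and retry once
            run(config)
        else:
            raise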
@@ -123,15 +122,15 @@ class ODMOpenMVSStage(types.ODM_Stage):
files_to_remove = []
scene_dense = os.path.join(tree.openmvs, 'scene_dense.mvs')
if args.pc_tile:
if pc_tile:
log.ODM_INFO("Computing sub-scenes")
subscene_densify_ini_file = os.path.join(tree.openmvs, 'subscene-config.ini')
with open(subscene_densify_ini_file, 'w+') as f:
f.write("Optimize = 0\n")
f.write("Optimize = 0\nEstimation Geometric Iters = 0\nMin Views Filter = 1\n")
config = [
"--sub-scene-area 660000",
"--sub-scene-area 660000", # 8000
"--max-threads %s" % args.max_concurrency,
'-w "%s"' % depthmaps_dir,
"-v 0",
@@ -162,9 +161,13 @@ class ODMOpenMVSStage(types.ODM_Stage):
config = [
'--resolution-level %s' % int(resolution_level),
'--max-resolution %s' % int(outputs['undist_image_max_size']),
"--sub-resolution-levels %s" % subres_levels,
'--dense-config-file "%s"' % subscene_densify_ini_file,
'--number-views-fuse %s' % number_views_fuse,
'--max-threads %s' % args.max_concurrency,
'--archive-type 3',
'--postprocess-dmaps 0',
'--geometric-iters 0',
'-w "%s"' % depthmaps_dir,
'-v 0',
]
@@ -180,7 +183,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
else:
# Filter
if args.pc_filter > 0:
system.run('"%s" "%s" --filter-point-cloud %s -v 0 %s' % (context.omvs_densify_path, scene_dense_mvs, filter_point_th, ' '.join(gpu_config)))
system.run('"%s" "%s" --filter-point-cloud %s -v 0 --archive-type 3 %s' % (context.omvs_densify_path, scene_dense_mvs, filter_point_th, ' '.join(gpu_config)))
else:
# Just rename
log.ODM_INFO("Skipped filtering, %s --> %s" % (scene_ply_unfiltered, scene_ply))
@@ -220,7 +223,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
try:
system.run('"%s" %s' % (context.omvs_densify_path, ' '.join(config + gpu_config + extra_config)))
except system.SubprocessException as e:
if e.errorCode == 137 or e.errorCode == 3221226505:
if e.errorCode == 137 or e.errorCode == 143 or e.errorCode == 3221226505:
log.ODM_WARNING("OpenMVS filtering ran out of memory, visibility checks will be skipped.")
skip_filtering()
else:

View file

@@ -35,7 +35,7 @@ class ODMOpenSfMStage(types.ODM_Stage):
octx.feature_matching(self.rerun())
self.update_progress(30)
octx.create_tracks(self.rerun())
octx.reconstruct(args.rolling_shutter, reconstruction.is_georeferenced(), self.rerun())
octx.reconstruct(args.rolling_shutter, reconstruction.is_georeferenced() and (not args.sfm_no_partial), self.rerun())
octx.extract_cameras(tree.path("cameras.json"), self.rerun())
self.update_progress(70)

View file

@@ -132,7 +132,7 @@ class ODMSplitStage(types.ODM_Stage):
log.ODM_INFO("Reconstructing %s" % sp)
local_sp_octx = OSFMContext(sp)
local_sp_octx.create_tracks(self.rerun())
local_sp_octx.reconstruct(args.rolling_shutter, True, self.rerun())
local_sp_octx.reconstruct(args.rolling_shutter, not args.sfm_no_partial, self.rerun())
else:
lre = LocalRemoteExecutor(args.sm_cluster, args.rolling_shutter, self.rerun())
lre.set_projects([os.path.abspath(os.path.join(p, "..")) for p in submodel_paths])
@@ -266,7 +266,7 @@ class ODMMergeStage(types.ODM_Stage):
orthophoto_vars = orthophoto.get_orthophoto_vars(args)
orthophoto.merge(all_orthos_and_ortho_cuts, tree.odm_orthophoto_tif, orthophoto_vars)
orthophoto.post_orthophoto_steps(args, merged_bounds_file, tree.odm_orthophoto_tif, tree.orthophoto_tiles)
orthophoto.post_orthophoto_steps(args, merged_bounds_file, tree.odm_orthophoto_tif, tree.orthophoto_tiles, args.orthophoto_resolution)
elif len(all_orthos_and_ortho_cuts) == 1:
# Simply copy
log.ODM_WARNING("A single orthophoto/cutline pair was found between all submodels.")
@@ -306,7 +306,7 @@ class ODMMergeStage(types.ODM_Stage):
log.ODM_INFO("Created %s" % dem_file)
if args.tiles:
generate_dem_tiles(dem_file, tree.path("%s_tiles" % human_name.lower()), args.max_concurrency)
generate_dem_tiles(dem_file, tree.path("%s_tiles" % human_name.lower()), args.max_concurrency, args.dem_resolution)
if args.cog:
convert_to_cogeo(dem_file, max_workers=args.max_concurrency)

View file

@@ -67,15 +67,15 @@ platform="Linux" # Assumed
uname=$(uname)
case $uname in
"Darwin")
platform="MacOS / OSX"
platform="MacOS"
;;
MINGW*)
platform="Windows"
;;
esac
if [[ $platform != "Linux" ]]; then
echo "This script only works on Linux."
if [[ $platform != "Linux" && $platform != "MacOS" ]]; then
echo "This script only works on Linux and MacOS."
exit 1
fi