Compare commits

...

92 Commits

Author SHA1 Message Date
Piero Toffanin ae6726e536
Merge pull request #1760 from pierotofy/fastcut
Skip feathered raster generation when possible
2024-05-17 15:51:32 -04:00
Piero Toffanin 6da366f806 Windows fix 2024-05-17 15:23:10 -04:00
Piero Toffanin e4e27c21f2 Skip feathered raster generation when possible 2024-05-17 14:55:26 -04:00
Piero Toffanin f9136f7a0d
Merge pull request #1758 from idimitrovski/master
Support for DJI Mavic 2 Zoom srt files
2024-05-10 11:24:52 -04:00
idimitrovski a2d9eccad5 Support for DJI Mavic 2 Zoom srt files 2024-05-10 09:29:37 +02:00
Piero Toffanin 424d9e28a0
Merge pull request #1756 from andrewharvey/patch-1
Fix PoissonRecon failed with n threads log message
2024-04-18 11:53:46 -04:00
Andrew Harvey a0fbd71d41
Fix PoissonRecon failed with n threads log message
The message was reporting failure with n threads and retrying with n // 2; however, a few lines up, threads had already been set to n // 2, representing the next thread count to try (a sketch of this control flow accompanies the corresponding diff below).
2024-04-18 15:35:53 +10:00
Piero Toffanin 6084d1dca0
Merge pull request #1754 from pierotofy/minviews
Use min views filter = 1
2024-04-11 23:18:26 -04:00
Piero Toffanin aef4182cf9 More solid OpenMVS clustering fallback 2024-04-11 14:18:20 -04:00
Piero Toffanin 6c0fe6e79d Bump version 2024-04-10 22:07:36 -04:00
Piero Toffanin 17dfc7599a Update pc-filter value to 5 2024-04-10 13:48:11 -04:00
Piero Toffanin a70e7445ad Update default feature-type, pc-filter values 2024-04-10 12:26:34 -04:00
Piero Toffanin 981bf88b48 Use min views filter = 1 2024-04-10 11:13:58 -04:00
Piero Toffanin ad63392e1a
Merge pull request #1752 from pierotofy/geobom
Fix BOM encoding bug with geo files
2024-04-02 12:53:35 -04:00
Piero Toffanin 77f8ffc8cd Fix BOM encoding bug with geo files 2024-04-02 12:46:20 -04:00
Piero Toffanin 4d7cf32a8c
Merge pull request #1751 from smathermather/fish-aye
replace fisheye with fisheye_opencv but keep API the same until 4.0
2024-03-11 23:32:19 -04:00
Stephen Mather 5a439c0ab6 replace fisheye with fisheye_opencv but keep API the same until 4.0 2024-03-11 22:56:48 -04:00
Piero Toffanin ffcda0dc57
Merge pull request #1749 from smathermather/increase-default-GPS-Accuracy
increase default GPS-Accuracy to 3m
2024-03-08 22:16:44 -05:00
Stephen Mather 2c6fd1dd9f
increase default GPS-Accuracy to 3m 2024-03-08 22:13:54 -05:00
Sylvain POULAIN cb3229a3d4
Add Mavic 3 rolling shutter, not enterprise version (#1747)
* Add Mavic 3 rolling shutter

* M3
2024-02-12 09:50:22 -05:00
Piero Toffanin fc9c94880f
Merge pull request #1746 from kielnino/set-extensionsused
GLTF - obj2glb - Set extensionsUsed in all cases to be consistent with the GLTF standard
2024-02-09 10:23:22 -05:00
kielnino b204a2eb98
set extensionsUsed in all cases 2024-02-09 15:06:02 +01:00
Piero Toffanin d9f77bea54
Merge pull request #1744 from kielnino/remove-unuses-mvs_tmp_dir
Update comment on mvs_tmp_dir
2024-02-01 09:14:23 -05:00
kielnino 10947ecddf
clarify usage of tmp directory 2024-02-01 12:02:06 +01:00
kielnino f7c7044823
remove unused mvs_tmp_dir 2024-02-01 09:25:10 +01:00
Piero Toffanin ae50133886
Merge pull request #1742 from pierotofy/eptclass
Classify point cloud before generating derivative outputs
2024-01-25 12:56:36 -05:00
Piero Toffanin 9fd3bf3edd Improve SRT parser to handle abs_alt altitude reference 2024-01-23 22:24:38 +00:00
Piero Toffanin fb85b754fb Classify point cloud before generating derivative outputs 2024-01-23 17:03:36 -05:00
Piero Toffanin 30f89c068c
Merge pull request #1739 from pierotofy/smartband
Fix build
2024-01-15 19:41:20 -05:00
Piero Toffanin 260b4ef864 Manually install numpy 2024-01-15 16:21:08 -05:00
Piero Toffanin fb5d88366e
Merge pull request #1738 from pierotofy/smartband
Ignore multispectral band groups that are missing images
2024-01-15 11:38:53 -05:00
Piero Toffanin f793627402 Ignore multispectral band groups that are missing images 2024-01-15 09:51:17 -05:00
Piero Toffanin 9183218f1b Bump version 2024-01-12 00:20:35 -05:00
Piero Toffanin 1283df206e
Merge pull request #1732 from OpenDroneMap/nolocalseam
Deprecate texturing-skip-local-seam-leveling
2023-12-11 15:25:51 -05:00
Piero Toffanin 76a061b86a Deprecate texturing-skip-local-seam-leveling 2023-12-11 14:57:21 -05:00
Piero Toffanin 32d933027e
Merge pull request #1731 from pierotofy/median
C++ median smoothing filter
2023-12-08 11:34:05 -05:00
Piero Toffanin a29280157e Add radius parameter 2023-12-07 16:12:15 -05:00
Piero Toffanin 704c285b8f Remove eigen dep 2023-12-07 15:58:12 -05:00
Piero Toffanin 5674e68e9f Median filtering using fastrasterfilter 2023-12-07 18:49:43 +00:00
Piero Toffanin d419d9f038
Merge pull request #1729 from pierotofy/corridor
dem2mesh improvements
2023-12-06 19:34:47 -05:00
Piero Toffanin b3ae35f5e5 Update dem2mesh 2023-12-06 13:52:24 -05:00
Piero Toffanin 18d4d31be7 Fix pc2dem.py 2023-12-06 12:34:07 -05:00
Piero Toffanin 16ccd277ec
Merge pull request #1728 from pierotofy/corridor
Improved DEM generation efficiency
2023-12-05 14:06:54 -05:00
Piero Toffanin 7048868f28 Improved DEM generation efficiency 2023-12-05 14:01:15 -05:00
Piero Toffanin b14ffd919a Remove need for one intermediate raster 2023-12-05 12:26:14 -05:00
Piero Toffanin 4d1d0350a5
Update issue-triage.yml 2023-12-05 10:12:06 -05:00
Piero Toffanin 7261c29efc Respect max_tile_size parameter 2023-12-04 22:49:06 -05:00
Piero Toffanin 2ccad6ee9d Fix renderdem bounds calculation 2023-12-04 22:26:04 -05:00
Piero Toffanin 6acf9835e5 Update issue-triage.yml 2023-11-29 23:34:52 -05:00
Piero Toffanin 5b5df3aaf7 Add issue-trage.yml 2023-11-29 23:33:36 -05:00
Piero Toffanin 26cc9fbf93
Merge pull request #1725 from pierotofy/renderdem
Render DEM tiles using RenderDEM
2023-11-28 13:10:35 -05:00
Piero Toffanin b08f955963 Use URL 2023-11-28 11:33:09 -05:00
Piero Toffanin d028873f63 Use PDAL fork 2023-11-28 11:24:50 -05:00
Piero Toffanin 2d2b809530 Set maxTiles check only in absence of georeferenced photos 2023-11-28 00:43:11 -05:00
Piero Toffanin 7e05a5b04e Minor fix 2023-11-27 16:34:08 -05:00
Piero Toffanin e0ab6ae7ed Bump version 2023-11-27 16:25:11 -05:00
Piero Toffanin eceae8d2e4 Render DEM tiles using RenderDEM 2023-11-27 16:20:21 -05:00
Piero Toffanin 55570385c1
Merge pull request #1720 from pierotofy/autorerun
Feat: Auto rerun-from
2023-11-13 13:42:04 -05:00
Piero Toffanin eed840c9bb Always auto-rerun from beginning with split 2023-11-13 13:40:55 -05:00
Piero Toffanin 8376f24f08 Remove duplicate stmt 2023-11-08 11:40:22 -05:00
Piero Toffanin 6d70a4f0be Fix processopts slice 2023-11-08 11:14:15 -05:00
Piero Toffanin 6df5e0b711 Feat: Auto rerun-from 2023-11-08 11:07:20 -05:00
Piero Toffanin 5d9564fda3
Merge pull request #1717 from pierotofy/fcp
Pin Eigen 3.4
2023-11-05 15:52:28 -05:00
Piero Toffanin eccb203d7a Pin eigen34 2023-11-05 15:45:12 -05:00
Piero Toffanin 2df4afaecf
Merge pull request #1716 from pierotofy/fcp
Fix fast_floor in FPC Filter, Invalid PLY file (expected 'property uint8 views')
2023-11-04 20:22:44 -04:00
Piero Toffanin e5ed68846e Fix OpenMVS subscene logic 2023-11-04 20:00:57 -04:00
Piero Toffanin 7cf71628f3 Fix fast_floor in FPC Filter 2023-11-04 13:32:40 -04:00
Piero Toffanin 237bf8fb87 Remove snap build 2023-10-30 00:39:09 -04:00
Piero Toffanin a542e7b78d
Merge pull request #1714 from pierotofy/dsp
Adaptive feature quality
2023-10-30 00:36:54 -04:00
Piero Toffanin 52fa5d12e6 Adaptive feature quality 2023-10-29 19:19:20 -04:00
Piero Toffanin e3296f0379
Merge pull request #1712 from pierotofy/dsp
Adds support for DSP SIFT
2023-10-29 18:29:11 -04:00
Piero Toffanin a06f6f19b2 Update OpenSfM 2023-10-29 17:54:12 -04:00
Piero Toffanin 2d94934595 Adds support for DSP SIFT 2023-10-27 22:33:43 -04:00
Piero Toffanin 08d03905e6
Merge pull request #1705 from MertenF/master
Make tiler zoom level configurable
2023-10-16 12:30:29 -04:00
Merten Fermont f70e55c9eb
Limit maximum tiler zoom level to 23 2023-10-16 09:59:48 +02:00
Merten Fermont a89803c2eb Use dem instead of orthophoto resolution for generating DEM tiles 2023-10-15 23:47:43 +02:00
Piero Toffanin de7595aeef
Merge pull request #1708 from pierotofy/reportmv
Add extra report file op, disable snap builds
2023-10-14 14:20:06 -04:00
Piero Toffanin aa0e9f68df Rename build file 2023-10-14 01:57:12 -04:00
Piero Toffanin 7ca122dbf6 Remove WSL install 2023-10-14 01:55:40 -04:00
Piero Toffanin 0d303aab16 Disable snap 2023-10-14 01:32:55 -04:00
Piero Toffanin 6dc0c98fa0 Remove previous report before move 2023-10-14 01:22:58 -04:00
Merten Fermont c679d400c8 Tiler zoom level is calculated from GSD
Instead of hardcoding a value, calculate the maximum zoom level at which there is still an increase in detail, using the configured orthophoto resolution or GSD.

The higher the latitude, the higher the resolution of the tile will be, so there is a chance of generating useless tiles, as there is no compensation for this. At the moment the worst-case resolution at the equator is used. (A worked example of this calculation follows the commit list below.)

Zoom level calculation from: https://wiki.openstreetmap.org/wiki/Zoom_levels
2023-10-12 22:22:10 +02:00
Piero Toffanin 38af615657
Merge pull request #1704 from pierotofy/gpurunner
Run GPU build on self-hosted runner
2023-10-05 15:30:31 -04:00
Piero Toffanin fc8dd7c5c5 Run GPU build on self-hosted runner 2023-10-05 15:29:14 -04:00
Piero Toffanin 6eca279c4b
Merge pull request #1702 from pierotofy/altumpt
Altum-PT support
2023-10-04 13:51:48 -04:00
Piero Toffanin 681ee18925 Adds support for Altum-PT 2023-10-03 13:06:36 -04:00
Piero Toffanin f9a3c5eb0e
Merge pull request #1701 from pierotofy/windate
Fix start/end date on Windows and enforce band order normalization
2023-10-02 10:14:25 -04:00
Piero Toffanin a56b52d0df Pick green band by default, improve mavic 3M support 2023-09-29 13:54:01 -04:00
Piero Toffanin f6be28db2a Give RGB, Blue priority 2023-09-29 13:11:11 -04:00
Piero Toffanin 5988be1f57 Bump version 2023-09-29 13:05:06 -04:00
Piero Toffanin d9600741d1 Enforce band order normalization 2023-09-29 13:00:32 -04:00
Piero Toffanin 57c61d918d Fix start/end date on Windows 2023-09-29 12:11:19 -04:00
42 changed files with 644 additions and 603 deletions
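For context on commit c679d400c8 above (tiler zoom level from GSD): a minimal worked example of the zoom-level formula, mirroring the calculation that appears in the `opendm/tiles.py` hunk further down. The 5 cm/px resolution is a hypothetical value, not one taken from the diff.

```python
import math

# Zoom-level calculation sketched from the opendm/tiles.py change below.
circumference_earth_cm = 2 * math.pi * 637_813_700           # Earth radius in cm
px_per_tile = 256
resolution_equator_cm = circumference_earth_cm / px_per_tile  # cm/px at zoom 0

resolution = 5.0  # hypothetical orthophoto GSD in cm/px
zoom = math.ceil(math.log(resolution_equator_cm / resolution, 2))
print(zoom)            # 22
print(min(zoom, 23))   # capped at 23, per the related commit f70e55c9eb
```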

View file

@@ -0,0 +1,33 @@
name: Issue Triage
on:
issues:
types:
- opened
jobs:
issue_triage:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- uses: pierotofy/issuewhiz@v1
with:
ghToken: ${{ secrets.GITHUB_TOKEN }}
openAI: ${{ secrets.OPENAI_TOKEN }}
filter: |
- "#"
variables: |
- Q: "A question about using a software or seeking guidance on doing something?"
- B: "Reporting an issue or a software bug?"
- P: "Describes an issue with processing a set of images or a particular dataset?"
- D: "Contains a link to a dataset or images?"
- E: "Contains a suggestion for an improvement or a feature request?"
- SC: "Describes an issue related to compiling or building source code?"
logic: |
- 'Q and (not B) and (not P) and (not E) and (not SC) and not (title_lowercase ~= ".*bug: .+")': [comment: "Could we move this conversation over to the forum at https://community.opendronemap.org? The forum is the right place to ask questions (we try to keep the GitHub issue tracker for feature requests and bugs only). Thank you!", close: true, stop: true]
- "B and (not P) and (not E) and (not SC)": [label: "software fault", stop: true]
- "P and D": [label: "possible software fault", stop: true]
- "P and (not D) and (not SC) and (not E)": [comment: "Thanks for the report, but it looks like you didn't include a copy of your dataset for us to reproduce this issue? Please make sure to follow our [issue guidelines](https://github.com/OpenDroneMap/ODM/blob/master/docs/issue_template.md) :pray: ", close: true, stop: true]
- "E": [label: enhancement, stop: true]
- "SC": [label: "possible software fault"]
signature: "p.s. I'm just an automated script, not a human being."
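For illustration, here is a rough sketch of how rules like the `logic` block above might be evaluated: each variable (Q, B, P, D, E, SC) is classified to a boolean, rules run top to bottom, and `stop: true` halts further matching. This is a hypothetical model of the pierotofy/issuewhiz action, not its actual implementation; the regex operator (`~=`) used in the first rule is omitted here.

```python
# Hypothetical evaluator for the boolean subset of the triage rules above.
# The real issuewhiz action may behave differently.
rules = [
    ("B and (not P) and (not E) and (not SC)", {"label": "software fault", "stop": True}),
    ("P and D", {"label": "possible software fault", "stop": True}),
    ("E", {"label": "enhancement", "stop": True}),
    ("SC", {"label": "possible software fault"}),
]

def triage(flags):
    """flags: booleans produced by the LLM classifier, e.g. {'B': True, ...}"""
    labels = []
    for expr, actions in rules:
        if eval(expr, {"__builtins__": {}}, dict(flags)):  # and/not expressions only
            if "label" in actions:
                labels.append(actions["label"])
            if actions.get("stop"):
                break
    return labels

print(triage({"Q": False, "B": True, "P": False, "D": False, "E": False, "SC": False}))
# -> ['software fault']
```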

View file

@@ -1,94 +0,0 @@
name: Publish Docker and WSL Images
on:
push:
branches:
- master
tags:
- v*
jobs:
build:
runs-on: self-hosted
timeout-minutes: 2880
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
config-inline: |
[worker.oci]
max-parallelism = 1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Use the repository information of the checked-out code to format docker tags
- name: Docker meta
id: docker_meta
uses: crazy-max/ghaction-docker-meta@v1
with:
images: opendronemap/odm
tag-semver: |
{{version}}
- name: Build and push Docker image
id: docker_build
uses: docker/build-push-action@v2
with:
file: ./portable.Dockerfile
platforms: linux/amd64,linux/arm64
push: true
no-cache: true
tags: |
${{ steps.docker_meta.outputs.tags }}
opendronemap/odm:latest
- name: Export WSL image
id: wsl_export
run: |
docker pull opendronemap/odm
docker export $(docker create opendronemap/odm) --output odm-wsl-rootfs-amd64.tar.gz
gzip odm-wsl-rootfs-amd64.tar.gz
echo ::set-output name=amd64-rootfs::"odm-wsl-rootfs-amd64.tar.gz"
# Convert tag into a GitHub Release if we're building a tag
- name: Create Release
if: github.event_name == 'tag'
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
draft: false
prerelease: false
# Upload the WSL image to the new Release if we're building a tag
- name: Upload amd64 Release Asset
if: github.event_name == 'tag'
id: upload-amd64-wsl-rootfs
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: ./${{ steps.wsl_export.outputs.amd64-rootfs }}
asset_name: ${{ steps.wsl_export.outputs.amd64-rootfs }}
asset_content_type: application/gzip
# Always archive the WSL rootfs
- name: Upload amd64 Artifact
uses: actions/upload-artifact@v2
with:
name: wsl-rootfs
path: ${{ steps.wsl_export.outputs.amd64-rootfs }}
- name: Docker image digest and WSL rootfs download URL
run: |
echo "Docker image digest: ${{ steps.docker_build.outputs.digest }}"
echo "WSL AMD64 rootfs URL: ${{ steps.upload-amd64-wsl-rootfs.browser_download_url }}"
# Trigger NodeODM build
- name: Dispatch NodeODM Build Event
id: nodeodm_dispatch
run: |
curl -X POST -u "${{secrets.PAT_USERNAME}}:${{secrets.PAT_TOKEN}}" -H "Accept: application/vnd.github.everest-preview+json" -H "Content-Type: application/json" https://api.github.com/repos/OpenDroneMap/NodeODM/actions/workflows/publish-docker.yaml/dispatches --data '{"ref": "master"}'

View file

@@ -9,14 +9,11 @@ on:
jobs:
build:
runs-on: ubuntu-latest
runs-on: self-hosted
timeout-minutes: 2880
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 12
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx

View file

@@ -0,0 +1,53 @@
name: Publish Docker and WSL Images
on:
push:
branches:
- master
tags:
- v*
jobs:
build:
runs-on: self-hosted
timeout-minutes: 2880
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
config-inline: |
[worker.oci]
max-parallelism = 1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Use the repository information of the checked-out code to format docker tags
- name: Docker meta
id: docker_meta
uses: crazy-max/ghaction-docker-meta@v1
with:
images: opendronemap/odm
tag-semver: |
{{version}}
- name: Build and push Docker image
id: docker_build
uses: docker/build-push-action@v2
with:
file: ./portable.Dockerfile
platforms: linux/amd64,linux/arm64
push: true
no-cache: true
tags: |
${{ steps.docker_meta.outputs.tags }}
opendronemap/odm:latest
# Trigger NodeODM build
- name: Dispatch NodeODM Build Event
id: nodeodm_dispatch
run: |
curl -X POST -u "${{secrets.PAT_USERNAME}}:${{secrets.PAT_TOKEN}}" -H "Accept: application/vnd.github.everest-preview+json" -H "Content-Type: application/json" https://api.github.com/repos/OpenDroneMap/NodeODM/actions/workflows/publish-docker.yaml/dispatches --data '{"ref": "master"}'

View file

@@ -1,51 +0,0 @@
name: Publish Snap
on:
push:
branches:
- master
tags:
- v**
jobs:
build-and-release:
runs-on: ubuntu-latest
strategy:
matrix:
architecture:
- amd64
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 12
- name: Build
id: build
uses: diddlesnaps/snapcraft-multiarch-action@v1
with:
architecture: ${{ matrix.architecture }}
- name: Publish unstable builds to Edge
if: github.ref == 'refs/heads/master'
uses: snapcore/action-publish@v1
with:
store_login: ${{ secrets.STORE_LOGIN }}
snap: ${{ steps.build.outputs.snap }}
release: edge
- name: Publish tagged prerelease builds to Beta
# These are identified by having a hyphen in the tag name, e.g.: v1.0.0-beta1
if: startsWith(github.ref, 'refs/tags/v') && contains(github.ref, '-')
uses: snapcore/action-publish@v1
with:
store_login: ${{ secrets.STORE_LOGIN }}
snap: ${{ steps.build.outputs.snap }}
release: beta
- name: Publish tagged stable or release-candidate builds to Candidate
# These are identified by NOT having a hyphen in the tag name, OR having "-RC" or "-rc" in the tag name.
if: startsWith(github.ref, 'refs/tags/v1') && ( ( ! contains(github.ref, '-') ) || contains(github.ref, '-RC') || contains(github.ref, '-rc') )
uses: snapcore/action-publish@v1
with:
store_login: ${{ secrets.STORE_LOGIN }}
snap: ${{ steps.build.outputs.snap }}
release: candidate

View file

@@ -83,30 +83,6 @@ ODM can be installed natively on Windows. Just download the latest setup from th
run C:\Users\youruser\datasets\project [--additional --parameters --here]
```
## Snap Package
ODM is now available as a Snap Package from the Snap Store. To install you may use the Snap Store (available itself as a Snap Package) or the command line:
```bash
sudo snap install --edge opendronemap
```
To run, you will need a terminal window into which you can type:
```bash
opendronemap
# or
snap run opendronemap
# or
/snap/bin/opendronemap
```
Snap packages will be kept up-to-date automatically, so you don't need to update ODM manually.
## GPU Acceleration
ODM has support for doing SIFT feature extraction on a GPU, which is about 2x faster than the CPU on a typical consumer laptop. To use this feature, you need to use the `opendronemap/odm:gpu` docker image instead of `opendronemap/odm` and you need to pass the `--gpus all` flag:
@@ -147,52 +123,6 @@ You're in good shape!
See https://github.com/NVIDIA/nvidia-docker and https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker for information on docker/NVIDIA setup.
## WSL or WSL2 Install
Note: This requires that you have installed WSL already by following [the instructions on Microsoft's Website](https://docs.microsoft.com/en-us/windows/wsl/install-win10).
You can run ODM via WSL or WSL2 by downloading the `rootfs.tar.gz` file from [the releases page on GitHub](https://github.com/OpenDroneMap/ODM/releases). Once you have the file saved to your `Downloads` folder in Windows, open a PowerShell or CMD window by right-clicking the Flag Menu (bottom left by default) and selecting "Windows PowerShell", or alternatively by using the [Windows Terminal from the Windows Store](https://www.microsoft.com/store/productId/9N0DX20HK701).
Inside a PowerShell window, or Windows Terminal running PowerShell, type the following:
```powershell
# PowerShell
wsl.exe --import ODM $env:APPDATA\ODM C:\path\to\your\Downloads\rootfs.tar.gz
```
Alternatively if you're using `CMD.exe` or the `CMD` support in Windows Terminal type:
```cmd
# CMD
wsl.exe --import ODM %APPDATA%\ODM C:\path\to\your\Downloads\rootfs.tar.gz
```
In either case, make sure you replace `C:\path\to\your\Downloads\rootfs.tar.gz` with the actual path to your `rootfs.tar.gz` file.
This will save a new Hard Disk image to your Windows `AppData` folder at `C:\Users\username\AppData\roaming\ODM` (where `username` is your Username in Windows), and will set-up a new WSL "distro" called `ODM`.
You may start the ODM distro by using the relevant option in the Windows Terminal (from the Windows Store) or by executing `wsl.exe -d ODM` in a PowerShell or CMD window.
ODM is installed to the distro's `/code` directory. You may execute it with:
```bash
/code/run.sh
```
### Updating ODM in WSL
The easiest way to update the installation of ODM is to download the new `rootfs.tar.gz` file and import it as another distro. You may then unregister the original instance the same way you delete ODM from WSL (see next heading).
### Deleting an ODM in WSL instance
```cmd
wsl.exe --unregister ODM
```
Finally you'll want to delete the files by using your Windows File Manager (Explorer) to navigate to `%APPDATA%`, find the `ODM` directory, and delete it by dragging it to the recycle bin. To permanently delete it empty the recycle bin.
If you have installed to a different directory by changing the `--import` command you ran to install you must use that directory name to delete the correct files. This is likely the case if you have multiple ODM installations or are updating an already-installed installation.
## Native Install (Ubuntu 21.04)
You can run ODM natively on Ubuntu 21.04 (although we don't recommend it):

View file

@@ -179,6 +179,7 @@ set(custom_libs OpenSfM
Obj2Tiles
OpenPointClass
ExifTool
RenderDEM
)
externalproject_add(mve
@@ -222,7 +223,7 @@ externalproject_add(poissonrecon
externalproject_add(dem2mesh
GIT_REPOSITORY https://github.com/OpenDroneMap/dem2mesh.git
GIT_TAG 313
GIT_TAG 334
PREFIX ${SB_BINARY_DIR}/dem2mesh
SOURCE_DIR ${SB_SOURCE_DIR}/dem2mesh
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
@@ -250,6 +251,15 @@ externalproject_add(odm_orthophoto
${WIN32_CMAKE_ARGS} ${WIN32_GDAL_ARGS}
)
externalproject_add(fastrasterfilter
GIT_REPOSITORY https://github.com/OpenDroneMap/FastRasterFilter.git
GIT_TAG main
PREFIX ${SB_BINARY_DIR}/fastrasterfilter
SOURCE_DIR ${SB_SOURCE_DIR}/fastrasterfilter
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
${WIN32_CMAKE_ARGS} ${WIN32_GDAL_ARGS}
)
externalproject_add(lastools
GIT_REPOSITORY https://github.com/OpenDroneMap/LAStools.git
GIT_TAG 250

View file

@@ -8,7 +8,7 @@ ExternalProject_Add(${_proj_name})
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/FPCFilter
GIT_TAG 320
GIT_TAG 331
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------

View file

@@ -14,7 +14,7 @@ externalproject_add(vcg
externalproject_add(eigen34
GIT_REPOSITORY https://gitlab.com/libeigen/eigen.git
GIT_TAG 3.4
GIT_TAG 7176ae16238ded7fb5ed30a7f5215825b3abd134
UPDATE_COMMAND ""
SOURCE_DIR ${SB_SOURCE_DIR}/eigen34
CONFIGURE_COMMAND ""

View file

@@ -25,7 +25,7 @@ ExternalProject_Add(${_proj_name})
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/OpenSfM/
GIT_TAG 319
GIT_TAG 330
#--Update/Patch step----------
UPDATE_COMMAND git submodule update --init --recursive
#--Configure step-------------

View file

@@ -16,7 +16,7 @@ ExternalProject_Add(${_proj_name})
STAMP_DIR ${_SB_BINARY_DIR}/stamp
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
URL https://github.com/PDAL/PDAL/archive/refs/tags/2.4.3.zip
URL https://github.com/OpenDroneMap/PDAL/archive/refs/heads/333.zip
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------

View file

@@ -0,0 +1,30 @@
set(_proj_name renderdem)
set(_SB_BINARY_DIR "${SB_BINARY_DIR}/${_proj_name}")
ExternalProject_Add(${_proj_name}
DEPENDS pdal
PREFIX ${_SB_BINARY_DIR}
TMP_DIR ${_SB_BINARY_DIR}/tmp
STAMP_DIR ${_SB_BINARY_DIR}/stamp
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/RenderDEM
GIT_TAG main
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------
SOURCE_DIR ${SB_SOURCE_DIR}/${_proj_name}
CMAKE_ARGS
-DPDAL_DIR=${SB_INSTALL_DIR}/lib/cmake/PDAL
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
${WIN32_CMAKE_ARGS}
#--Build step-----------------
BINARY_DIR ${_SB_BINARY_DIR}
#--Install step---------------
INSTALL_DIR ${SB_INSTALL_DIR}
#--Output logging-------------
LOG_DOWNLOAD OFF
LOG_CONFIGURE OFF
LOG_BUILD OFF
)

View file

@@ -1 +1 @@
3.2.1
3.5.1

View file

@@ -127,6 +127,9 @@ installreqs() {
installdepsfromsnapcraft build openmvs
set -e
# edt requires numpy to build
pip install --ignore-installed numpy==1.23.1
pip install --ignore-installed -r requirements.txt
#if [ ! -z "$GPU_INSTALL" ]; then
#fi

View file

@@ -51,6 +51,5 @@ commands.create_dem(args.point_cloud,
outdir=outdir,
resolution=args.resolution,
decimation=1,
max_workers=multiprocessing.cpu_count(),
keep_unfilled_copy=False
max_workers=multiprocessing.cpu_count()
)

View file

@@ -0,0 +1,76 @@
from opendm import log
from shlex import _find_unsafe
import json
import os
def double_quote(s):
"""Return a shell-escaped version of the string *s*."""
if not s:
return '""'
if _find_unsafe(s) is None:
return s
# use double quotes, and prefix double quotes with a \
# the string $"b is then quoted as "$\"b"
return '"' + s.replace('"', '\\\"') + '"'
def args_to_dict(args):
args_dict = vars(args)
result = {}
for k in sorted(args_dict.keys()):
# Skip _is_set keys
if k.endswith("_is_set"):
continue
# Don't leak token
if k == 'sm_cluster' and args_dict[k] is not None:
result[k] = True
else:
result[k] = args_dict[k]
return result
def save_opts(opts_json, args):
try:
with open(opts_json, "w", encoding='utf-8') as f:
f.write(json.dumps(args_to_dict(args)))
except Exception as e:
log.ODM_WARNING("Cannot save options to %s: %s" % (opts_json, str(e)))
def compare_args(opts_json, args, rerun_stages):
if not os.path.isfile(opts_json):
return {}
try:
diff = {}
with open(opts_json, "r", encoding="utf-8") as f:
prev_args = json.loads(f.read())
cur_args = args_to_dict(args)
for opt in cur_args:
cur_value = cur_args[opt]
prev_value = prev_args.get(opt, None)
stage = rerun_stages.get(opt, None)
if stage is not None and cur_value != prev_value:
diff[opt] = prev_value
return diff
except:
return {}
def find_rerun_stage(opts_json, args, rerun_stages, processopts):
# Find the proper rerun stage if one is not explicitly set
if not ('rerun_is_set' in args or 'rerun_from_is_set' in args or 'rerun_all_is_set' in args):
args_diff = compare_args(opts_json, args, rerun_stages)
if args_diff:
if 'split_is_set' in args:
return processopts[processopts.index('dataset'):], args_diff
try:
stage_idxs = [processopts.index(rerun_stages[opt]) for opt in args_diff.keys() if rerun_stages[opt] is not None]
return processopts[min(stage_idxs):], args_diff
except ValueError as e:
print(str(e))
return None, {}
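A minimal usage sketch for the new helper (assuming it is importable as `opendm.arghelpers`, per the import change further down; the stage map below is a reduced, illustrative subset of the full `rerun_stages` dictionary):

```python
import json, os, tempfile
from argparse import Namespace
from opendm.arghelpers import find_rerun_stage  # the module added above

processopts = ['dataset', 'split', 'merge', 'opensfm', 'openmvs', 'odm_filterpoints',
               'odm_meshing', 'mvs_texturing', 'odm_georeferencing',
               'odm_dem', 'odm_orthophoto', 'odm_report', 'odm_postprocess']
rerun_stages = {'feature_type': 'opensfm', 'dsm': 'odm_dem'}  # illustrative subset

# A previous run saved feature_type=sift; the new run asks for dspsift.
opts_json = os.path.join(tempfile.mkdtemp(), "options.json")
with open(opts_json, "w") as f:
    json.dump({"feature_type": "sift", "dsm": False}, f)

args = Namespace(feature_type="dspsift", dsm=False)
stages, diff = find_rerun_stage(opts_json, args, rerun_stages, processopts)
print(stages[0], diff)  # opensfm {'feature_type': 'sift'}
```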

View file

@@ -13,6 +13,100 @@ processopts = ['dataset', 'split', 'merge', 'opensfm', 'openmvs', 'odm_filterpoi
'odm_meshing', 'mvs_texturing', 'odm_georeferencing',
'odm_dem', 'odm_orthophoto', 'odm_report', 'odm_postprocess']
rerun_stages = {
'3d_tiles': 'odm_postprocess',
'align': 'odm_georeferencing',
'auto_boundary': 'odm_filterpoints',
'auto_boundary_distance': 'odm_filterpoints',
'bg_removal': 'dataset',
'boundary': 'odm_filterpoints',
'build_overviews': 'odm_orthophoto',
'camera_lens': 'dataset',
'cameras': 'dataset',
'cog': 'odm_dem',
'copy_to': 'odm_postprocess',
'crop': 'odm_georeferencing',
'dem_decimation': 'odm_dem',
'dem_euclidean_map': 'odm_dem',
'dem_gapfill_steps': 'odm_dem',
'dem_resolution': 'odm_dem',
'dsm': 'odm_dem',
'dtm': 'odm_dem',
'end_with': None,
'fast_orthophoto': 'odm_filterpoints',
'feature_quality': 'opensfm',
'feature_type': 'opensfm',
'force_gps': 'opensfm',
'gcp': 'dataset',
'geo': 'dataset',
'gltf': 'mvs_texturing',
'gps_accuracy': 'dataset',
'help': None,
'ignore_gsd': 'opensfm',
'matcher_neighbors': 'opensfm',
'matcher_order': 'opensfm',
'matcher_type': 'opensfm',
'max_concurrency': None,
'merge': 'Merge',
'mesh_octree_depth': 'odm_meshing',
'mesh_size': 'odm_meshing',
'min_num_features': 'opensfm',
'name': None,
'no_gpu': None,
'optimize_disk_space': None,
'orthophoto_compression': 'odm_orthophoto',
'orthophoto_cutline': 'odm_orthophoto',
'orthophoto_kmz': 'odm_orthophoto',
'orthophoto_no_tiled': 'odm_orthophoto',
'orthophoto_png': 'odm_orthophoto',
'orthophoto_resolution': 'odm_orthophoto',
'pc_classify': 'odm_georeferencing',
'pc_copc': 'odm_georeferencing',
'pc_csv': 'odm_georeferencing',
'pc_ept': 'odm_georeferencing',
'pc_filter': 'openmvs',
'pc_las': 'odm_georeferencing',
'pc_quality': 'opensfm',
'pc_rectify': 'odm_georeferencing',
'pc_sample': 'odm_filterpoints',
'pc_skip_geometric': 'openmvs',
'primary_band': 'dataset',
'project_path': None,
'radiometric_calibration': 'opensfm',
'rerun': None,
'rerun_all': None,
'rerun_from': None,
'rolling_shutter': 'opensfm',
'rolling_shutter_readout': 'opensfm',
'sfm_algorithm': 'opensfm',
'sfm_no_partial': 'opensfm',
'skip_3dmodel': 'odm_meshing',
'skip_band_alignment': 'opensfm',
'skip_orthophoto': 'odm_orthophoto',
'skip_report': 'odm_report',
'sky_removal': 'dataset',
'sm_cluster': 'split',
'sm_no_align': 'split',
'smrf_scalar': 'odm_dem',
'smrf_slope': 'odm_dem',
'smrf_threshold': 'odm_dem',
'smrf_window': 'odm_dem',
'split': 'split',
'split_image_groups': 'split',
'split_overlap': 'split',
'texturing_keep_unseen_faces': 'mvs_texturing',
'texturing_single_material': 'mvs_texturing',
'texturing_skip_global_seam_leveling': 'mvs_texturing',
'tiles': 'odm_dem',
'use_3dmesh': 'mvs_texturing',
'use_exif': 'dataset',
'use_fixed_camera_params': 'opensfm',
'use_hybrid_bundle_adjustment': 'opensfm',
'version': None,
'video_limit': 'dataset',
'video_resolution': 'dataset',
}
with open(os.path.join(context.root_path, 'VERSION')) as version_file:
__version__ = version_file.read().strip()
@@ -123,8 +217,8 @@ def config(argv=None, parser=None):
parser.add_argument('--feature-type',
metavar='<string>',
action=StoreValue,
default='sift',
choices=['akaze', 'hahog', 'orb', 'sift'],
default='dspsift',
choices=['akaze', 'dspsift', 'hahog', 'orb', 'sift'],
help=('Choose the algorithm for extracting keypoints and computing descriptors. '
'Can be one of: %(choices)s. Default: '
'%(default)s'))
@@ -391,7 +485,7 @@ def config(argv=None, parser=None):
metavar='<positive float>',
action=StoreValue,
type=float,
default=2.5,
default=5,
help='Filters the point cloud by removing points that deviate more than N standard deviations from the local mean. Set to 0 to disable filtering. '
'Default: %(default)s')
@@ -448,12 +542,6 @@ def config(argv=None, parser=None):
default=False,
help=('Skip normalization of colors across all images. Useful when processing radiometric data. Default: %(default)s'))
parser.add_argument('--texturing-skip-local-seam-leveling',
action=StoreTrue,
nargs=0,
default=False,
help='Skip the blending of colors near seams. Default: %(default)s')
parser.add_argument('--texturing-keep-unseen-faces',
action=StoreTrue,
nargs=0,
@@ -750,7 +838,7 @@ def config(argv=None, parser=None):
type=float,
action=StoreValue,
metavar='<positive float>',
default=10,
default=3,
help='Set a value in meters for the GPS Dilution of Precision (DOP) '
'information for all images. If your images are tagged '
'with high precision GPS information (RTK), this value will be automatically '
@@ -792,7 +880,7 @@ def config(argv=None, parser=None):
'Default: %(default)s'))
args, unknown = parser.parse_known_args(argv)
DEPRECATED = ["--verbose", "--debug", "--time", "--resize-to", "--depthmap-resolution", "--pc-geometric", "--texturing-data-term", "--texturing-outlier-removal-type", "--texturing-tone-mapping"]
DEPRECATED = ["--verbose", "--debug", "--time", "--resize-to", "--depthmap-resolution", "--pc-geometric", "--texturing-data-term", "--texturing-outlier-removal-type", "--texturing-tone-mapping", "--texturing-skip-local-seam-leveling"]
unknown_e = [p for p in unknown if p not in DEPRECATED]
if len(unknown_e) > 0:
raise parser.error("unrecognized arguments: %s" % " ".join(unknown_e))

View file

@@ -5,23 +5,17 @@ import numpy
import math
import time
import shutil
import functools
import threading
import glob
import re
from joblib import delayed, Parallel
from opendm.system import run
from opendm import point_cloud
from opendm import io
from opendm import system
from opendm.concurrency import get_max_memory, parallel_map, get_total_memory
from scipy import ndimage
from datetime import datetime
from opendm.vendor.gdal_fillnodata import main as gdal_fillnodata
from opendm import log
try:
import Queue as queue
except:
import queue
import threading
from .ground_rectification.rectify import run_rectification
from . import pdal
@@ -69,119 +63,51 @@ error = None
def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56'], gapfill=True,
outdir='', resolution=0.1, max_workers=1, max_tile_size=4096,
decimation=None, keep_unfilled_copy=False,
decimation=None, with_euclidean_map=False,
apply_smoothing=True, max_tiles=None):
""" Create DEM from multiple radii, and optionally gapfill """
global error
error = None
start = datetime.now()
if not os.path.exists(outdir):
log.ODM_INFO("Creating %s" % outdir)
os.mkdir(outdir)
extent = point_cloud.get_extent(input_point_cloud)
log.ODM_INFO("Point cloud bounds are [minx: %s, maxx: %s] [miny: %s, maxy: %s]" % (extent['minx'], extent['maxx'], extent['miny'], extent['maxy']))
ext_width = extent['maxx'] - extent['minx']
ext_height = extent['maxy'] - extent['miny']
w, h = (int(math.ceil(ext_width / float(resolution))),
int(math.ceil(ext_height / float(resolution))))
# Set a floor, no matter the resolution parameter
# (sometimes a wrongly estimated scale of the model can cause the resolution
# to be set unrealistically low, causing errors)
RES_FLOOR = 64
if w < RES_FLOOR and h < RES_FLOOR:
prev_w, prev_h = w, h
if w >= h:
w, h = (RES_FLOOR, int(math.ceil(ext_height / ext_width * RES_FLOOR)))
else:
w, h = (int(math.ceil(ext_width / ext_height * RES_FLOOR)), RES_FLOOR)
floor_ratio = prev_w / float(w)
resolution *= floor_ratio
radiuses = [str(float(r) * floor_ratio) for r in radiuses]
log.ODM_WARNING("Really low resolution DEM requested %s will set floor at %s pixels. Resolution changed to %s. The scale of this reconstruction might be off." % ((prev_w, prev_h), RES_FLOOR, resolution))
final_dem_pixels = w * h
num_splits = int(max(1, math.ceil(math.log(math.ceil(final_dem_pixels / float(max_tile_size * max_tile_size)))/math.log(2))))
num_tiles = num_splits * num_splits
log.ODM_INFO("DEM resolution is %s, max tile size is %s, will split DEM generation into %s tiles" % ((h, w), max_tile_size, num_tiles))
tile_bounds_width = ext_width / float(num_splits)
tile_bounds_height = ext_height / float(num_splits)
tiles = []
for r in radiuses:
minx = extent['minx']
for x in range(num_splits):
miny = extent['miny']
if x == num_splits - 1:
maxx = extent['maxx']
else:
maxx = minx + tile_bounds_width
for y in range(num_splits):
if y == num_splits - 1:
maxy = extent['maxy']
else:
maxy = miny + tile_bounds_height
filename = os.path.join(os.path.abspath(outdir), '%s_r%s_x%s_y%s.tif' % (dem_type, r, x, y))
tiles.append({
'radius': r,
'bounds': {
'minx': minx,
'maxx': maxx,
'miny': miny,
'maxy': maxy
},
'filename': filename
})
miny = maxy
minx = maxx
# Safety check
if max_tiles is not None:
if len(tiles) > max_tiles and (final_dem_pixels * 4) > get_total_memory():
raise system.ExitException("Max tiles limit exceeded (%s). This is a strong indicator that the reconstruction failed and we would probably run out of memory trying to process this" % max_tiles)
# Sort tiles by increasing radius
tiles.sort(key=lambda t: float(t['radius']), reverse=True)
def process_tile(q):
log.ODM_INFO("Generating %s (%s, radius: %s, resolution: %s)" % (q['filename'], output_type, q['radius'], resolution))
d = pdal.json_gdal_base(q['filename'], output_type, q['radius'], resolution, q['bounds'])
if dem_type == 'dtm':
d = pdal.json_add_classification_filter(d, 2)
if decimation is not None:
d = pdal.json_add_decimation_filter(d, decimation)
pdal.json_add_readers(d, [input_point_cloud])
pdal.run_pipeline(d)
parallel_map(process_tile, tiles, max_workers)
kwargs = {
'input': input_point_cloud,
'outdir': outdir,
'outputType': output_type,
'radiuses': ",".join(map(str, radiuses)),
'resolution': resolution,
'maxTiles': 0 if max_tiles is None else max_tiles,
'decimation': 1 if decimation is None else decimation,
'classification': 2 if dem_type == 'dtm' else -1,
'tileSize': max_tile_size
}
system.run('renderdem "{input}" '
'--outdir "{outdir}" '
'--output-type {outputType} '
'--radiuses {radiuses} '
'--resolution {resolution} '
'--max-tiles {maxTiles} '
'--decimation {decimation} '
'--classification {classification} '
'--tile-size {tileSize} '
'--force '.format(**kwargs), env_vars={'OMP_NUM_THREADS': max_workers})
output_file = "%s.tif" % dem_type
output_path = os.path.abspath(os.path.join(outdir, output_file))
# Verify tile results
for t in tiles:
if not os.path.exists(t['filename']):
raise Exception("Error creating %s, %s failed to be created" % (output_file, t['filename']))
# Fetch tiles
tiles = []
for p in glob.glob(os.path.join(os.path.abspath(outdir), "*.tif")):
filename = os.path.basename(p)
m = re.match("^r([\d\.]+)_x\d+_y\d+\.tif", filename)
if m is not None:
tiles.append({'filename': p, 'radius': float(m.group(1))})
if len(tiles) == 0:
raise system.ExitException("No DEM tiles were generated, something went wrong")
log.ODM_INFO("Generated %s tiles" % len(tiles))
# Sort tiles by decreasing radius
tiles.sort(key=lambda t: float(t['radius']), reverse=True)
# Create virtual raster
tiles_vrt_path = os.path.abspath(os.path.join(outdir, "tiles.vrt"))
@@ -193,7 +119,6 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
run('gdalbuildvrt -input_file_list "%s" "%s" ' % (tiles_file_list, tiles_vrt_path))
merged_vrt_path = os.path.abspath(os.path.join(outdir, "merged.vrt"))
geotiff_tmp_path = os.path.abspath(os.path.join(outdir, 'tiles.tmp.tif'))
geotiff_small_path = os.path.abspath(os.path.join(outdir, 'tiles.small.tif'))
geotiff_small_filled_path = os.path.abspath(os.path.join(outdir, 'tiles.small_filled.tif'))
geotiff_path = os.path.abspath(os.path.join(outdir, 'tiles.tif'))
@@ -205,7 +130,6 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
'tiles_vrt': tiles_vrt_path,
'merged_vrt': merged_vrt_path,
'geotiff': geotiff_path,
'geotiff_tmp': geotiff_tmp_path,
'geotiff_small': geotiff_small_path,
'geotiff_small_filled': geotiff_small_filled_path
}
@@ -214,31 +138,27 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
# Sometimes, for some reason gdal_fillnodata.py
# behaves strangely when reading data directly from a .VRT
# so we need to convert to GeoTIFF first.
# Scale to 10% size
run('gdal_translate '
'-co NUM_THREADS={threads} '
'-co BIGTIFF=IF_SAFER '
'-co COMPRESS=DEFLATE '
'--config GDAL_CACHEMAX {max_memory}% '
'"{tiles_vrt}" "{geotiff_tmp}"'.format(**kwargs))
# Scale to 10% size
run('gdal_translate '
'-co NUM_THREADS={threads} '
'-co BIGTIFF=IF_SAFER '
'--config GDAL_CACHEMAX {max_memory}% '
'-outsize 10% 0 '
'"{geotiff_tmp}" "{geotiff_small}"'.format(**kwargs))
'-outsize 10% 0 '
'"{tiles_vrt}" "{geotiff_small}"'.format(**kwargs))
# Fill scaled
gdal_fillnodata(['.',
'-co', 'NUM_THREADS=%s' % kwargs['threads'],
'-co', 'BIGTIFF=IF_SAFER',
'-co', 'COMPRESS=DEFLATE',
'--config', 'GDAL_CACHE_MAX', str(kwargs['max_memory']) + '%',
'-b', '1',
'-of', 'GTiff',
kwargs['geotiff_small'], kwargs['geotiff_small_filled']])
# Merge filled scaled DEM with unfilled DEM using bilinear interpolation
run('gdalbuildvrt -resolution highest -r bilinear "%s" "%s" "%s"' % (merged_vrt_path, geotiff_small_filled_path, geotiff_tmp_path))
run('gdalbuildvrt -resolution highest -r bilinear "%s" "%s" "%s"' % (merged_vrt_path, geotiff_small_filled_path, tiles_vrt_path))
run('gdal_translate '
'-co NUM_THREADS={threads} '
'-co TILED=YES '
@@ -261,14 +181,14 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
else:
os.replace(geotiff_path, output_path)
if os.path.exists(geotiff_tmp_path):
if not keep_unfilled_copy:
os.remove(geotiff_tmp_path)
else:
os.replace(geotiff_tmp_path, io.related_file_path(output_path, postfix=".unfilled"))
if os.path.exists(tiles_vrt_path):
if with_euclidean_map:
emap_path = io.related_file_path(output_path, postfix=".euclideand")
compute_euclidean_map(tiles_vrt_path, emap_path, overwrite=True)
for cleanup_file in [tiles_vrt_path, tiles_file_list, merged_vrt_path, geotiff_small_path, geotiff_small_filled_path]:
if os.path.exists(cleanup_file): os.remove(cleanup_file)
for t in tiles:
if os.path.exists(t['filename']): os.remove(t['filename'])
@@ -284,12 +204,20 @@ def compute_euclidean_map(geotiff_path, output_path, overwrite=False):
with rasterio.open(geotiff_path) as f:
nodata = f.nodatavals[0]
if not os.path.exists(output_path) or overwrite:
if not os.path.isfile(output_path) or overwrite:
if os.path.isfile(output_path):
os.remove(output_path)
log.ODM_INFO("Computing euclidean distance: %s" % output_path)
if gdal_proximity is not None:
try:
gdal_proximity(['gdal_proximity.py', geotiff_path, output_path, '-values', str(nodata)])
gdal_proximity(['gdal_proximity.py',
geotiff_path, output_path, '-values', str(nodata),
'-co', 'TILED=YES',
'-co', 'BIGTIFF=IF_SAFER',
'-co', 'COMPRESS=DEFLATE',
])
except Exception as e:
log.ODM_WARNING("Cannot compute euclidean distance: %s" % str(e))
@@ -305,106 +233,31 @@ def compute_euclidean_map(geotiff_path, output_path, overwrite=False):
return output_path
def median_smoothing(geotiff_path, output_path, smoothing_iterations=1, window_size=512, num_workers=1):
def median_smoothing(geotiff_path, output_path, window_size=512, num_workers=1, radius=4):
""" Apply median smoothing """
start = datetime.now()
if not os.path.exists(geotiff_path):
raise Exception('File %s does not exist!' % geotiff_path)
# Prepare temporary files
folder_path, output_filename = os.path.split(output_path)
basename, ext = os.path.splitext(output_filename)
output_dirty_in = os.path.join(folder_path, "{}.dirty_1{}".format(basename, ext))
output_dirty_out = os.path.join(folder_path, "{}.dirty_2{}".format(basename, ext))
log.ODM_INFO('Starting smoothing...')
with rasterio.open(geotiff_path, num_threads=num_workers) as img, rasterio.open(output_dirty_in, "w+", BIGTIFF="IF_SAFER", num_threads=num_workers, **img.profile) as imgout, rasterio.open(output_dirty_out, "w+", BIGTIFF="IF_SAFER", num_threads=num_workers, **img.profile) as imgout2:
nodata = img.nodatavals[0]
dtype = img.dtypes[0]
shape = img.shape
for i in range(smoothing_iterations):
log.ODM_INFO("Smoothing iteration %s" % str(i + 1))
rows, cols = numpy.meshgrid(numpy.arange(0, shape[0], window_size), numpy.arange(0, shape[1], window_size))
rows = rows.flatten()
cols = cols.flatten()
rows_end = numpy.minimum(rows + window_size, shape[0])
cols_end= numpy.minimum(cols + window_size, shape[1])
windows = numpy.dstack((rows, cols, rows_end, cols_end)).reshape(-1, 4)
filter = functools.partial(ndimage.median_filter, size=9, output=dtype, mode='nearest')
# We cannot read/write to the same file from multiple threads without causing race conditions.
# To safely read/write from multiple threads, we use a lock to protect the DatasetReader/Writer.
read_lock = threading.Lock()
write_lock = threading.Lock()
# threading backend and GIL released filter are important for memory efficiency and multi-core performance
Parallel(n_jobs=num_workers, backend='threading')(delayed(window_filter_2d)(img, imgout, nodata , window, 9, filter, read_lock, write_lock) for window in windows)
# Between each iteration we swap the input and output temporary files
#img_in, img_out = img_out, img_in
if (i == 0):
img = imgout
imgout = imgout2
else:
img, imgout = imgout, img
# If the number of iterations was even, we need to swap temporary files
if (smoothing_iterations % 2 != 0):
output_dirty_in, output_dirty_out = output_dirty_out, output_dirty_in
# Cleaning temporary files
if os.path.exists(output_dirty_out):
os.replace(output_dirty_out, output_path)
if os.path.exists(output_dirty_in):
os.remove(output_dirty_in)
kwargs = {
'input': geotiff_path,
'output': output_path,
'window': window_size,
'radius': radius,
}
system.run('fastrasterfilter "{input}" '
'--output "{output}" '
'--window-size {window} '
'--radius {radius} '
'--co TILED=YES '
'--co BIGTIFF=IF_SAFER '
'--co COMPRESS=DEFLATE '.format(**kwargs), env_vars={'OMP_NUM_THREADS': num_workers})
log.ODM_INFO('Completed smoothing to create %s in %s' % (output_path, datetime.now() - start))
return output_path
def window_filter_2d(img, imgout, nodata, window, kernel_size, filter, read_lock, write_lock):
"""
Apply a filter to dem within a window, expects to work with kernal based filters
:param img: path to the geotiff to filter
:param imgout: path to write the giltered geotiff to. It can be the same as img to do the modification in place.
:param window: the window to apply the filter, should be a list contains row start, col_start, row_end, col_end
:param kernel_size: the size of the kernel for the filter, works with odd numbers, need to test if it works with even numbers
:param filter: the filter function which takes a 2d array as input and filter results as output.
:param read_lock: threading lock for the read operation
:param write_lock: threading lock for the write operation
"""
shape = img.shape[:2]
if window[0] < 0 or window[1] < 0 or window[2] > shape[0] or window[3] > shape[1]:
raise Exception('Window is out of bounds')
expanded_window = [ max(0, window[0] - kernel_size // 2), max(0, window[1] - kernel_size // 2), min(shape[0], window[2] + kernel_size // 2), min(shape[1], window[3] + kernel_size // 2) ]
# Read input window
width = expanded_window[3] - expanded_window[1]
height = expanded_window[2] - expanded_window[0]
rasterio_window = rasterio.windows.Window(col_off=expanded_window[1], row_off=expanded_window[0], width=width, height=height)
with read_lock:
win_arr = img.read(indexes=1, window=rasterio_window)
# Should have a better way to handle nodata, similar to the way the filter algorithms handle the border (reflection, nearest, interpolation, etc).
# For now will follow the old approach to guarantee identical outputs
nodata_locs = win_arr == nodata
win_arr = filter(win_arr)
win_arr[nodata_locs] = nodata
win_arr = win_arr[window[0] - expanded_window[0] : window[2] - expanded_window[0], window[1] - expanded_window[1] : window[3] - expanded_window[1]]
# Write output window
width = window[3] - window[1]
height = window[2] - window[0]
rasterio_window = rasterio.windows.Window(col_off=window[1], row_off=window[0], width=width, height=height)
with write_lock:
imgout.write(win_arr, indexes=1, window=rasterio_window)
def get_dem_radius_steps(stats_file, steps, resolution, multiplier = 1.0):
radius_steps = [point_cloud.get_spacing(stats_file, resolution) * multiplier]
for _ in range(steps - 1):

View file

@@ -12,6 +12,9 @@ class GeoFile:
with open(self.geo_path, 'r') as f:
contents = f.read().strip()
# Strip eventual BOM characters
contents = contents.replace('\ufeff', '')
lines = list(map(str.strip, contents.split('\n')))
if lines:

View file

@@ -279,9 +279,10 @@ def obj2glb(input_obj, output_glb, rtc=(None, None), draco_compression=True, _in
)
gltf.extensionsRequired = ['KHR_materials_unlit']
gltf.extensionsUsed = ['KHR_materials_unlit']
if rtc != (None, None) and len(rtc) >= 2:
gltf.extensionsUsed = ['CESIUM_RTC', 'KHR_materials_unlit']
gltf.extensionsUsed.append('CESIUM_RTC')
gltf.extensions = {
'CESIUM_RTC': {
'center': [float(rtc[0]), float(rtc[1]), 0.0]

View file

@@ -7,7 +7,7 @@ import dateutil.parser
import shutil
import multiprocessing
from opendm.loghelpers import double_quote, args_to_dict
from opendm.arghelpers import double_quote, args_to_dict
from vmem import virtual_memory
if sys.platform == 'win32' or os.getenv('no_ansiesc'):

View file

@@ -1,28 +0,0 @@
from shlex import _find_unsafe
def double_quote(s):
"""Return a shell-escaped version of the string *s*."""
if not s:
return '""'
if _find_unsafe(s) is None:
return s
# use double quotes, and prefix double quotes with a \
# the string $"b is then quoted as "$\"b"
return '"' + s.replace('"', '\\\"') + '"'
def args_to_dict(args):
args_dict = vars(args)
result = {}
for k in sorted(args_dict.keys()):
# Skip _is_set keys
if k.endswith("_is_set"):
continue
# Don't leak token
if k == 'sm_cluster' and args_dict[k] is not None:
result[k] = True
else:
result[k] = args_dict[k]
return result

View file

@@ -187,7 +187,7 @@ def screened_poisson_reconstruction(inPointCloud, outMesh, depth = 8, samples =
if threads < 1:
break
else:
log.ODM_WARNING("PoissonRecon failed with %s threads, let's retry with %s..." % (threads, threads // 2))
log.ODM_WARNING("PoissonRecon failed with %s threads, let's retry with %s..." % (threads * 2, threads))
# Cleanup and reduce vertex count if necessary
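For context on commit a0fbd71d41 above: a minimal sketch of the surrounding retry loop, with a hypothetical `run_poisson` stand-in for the actual PoissonRecon invocation, showing why `threads` already holds the halved value by the time the warning is logged:

```python
import multiprocessing

def run_poisson(threads):
    # Hypothetical stand-in for the PoissonRecon call; returns True on success.
    return False

threads = multiprocessing.cpu_count()
while True:
    if run_poisson(threads):
        break
    threads //= 2  # halved *before* the log line below
    if threads < 1:
        break
    # The attempt that just failed used threads * 2, hence the corrected message:
    print("PoissonRecon failed with %s threads, let's retry with %s..." % (threads * 2, threads))
```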

View file

@@ -181,8 +181,13 @@ def get_primary_band_name(multi_camera, user_band_name):
if len(multi_camera) < 1:
raise Exception("Invalid multi_camera list")
# multi_camera is already sorted by band_index
# Pick RGB, or Green, or Blue, in this order, if available, otherwise first band
if user_band_name == "auto":
for aliases in [['rgb', 'redgreenblue'], ['green', 'g'], ['blue', 'b']]:
for band in multi_camera:
if band['name'].lower() in aliases:
return band['name']
return multi_camera[0]['name']
for band in multi_camera:

View file

@@ -85,7 +85,7 @@ def generate_kmz(orthophoto_file, output_file=None, outsize=None):
system.run('gdal_translate -of KMLSUPEROVERLAY -co FORMAT=PNG "%s" "%s" %s '
'--config GDAL_CACHEMAX %s%% ' % (orthophoto_file, output_file, bandparam, get_max_memory()))
def post_orthophoto_steps(args, bounds_file_path, orthophoto_file, orthophoto_tiles_dir):
def post_orthophoto_steps(args, bounds_file_path, orthophoto_file, orthophoto_tiles_dir, resolution):
if args.crop > 0 or args.boundary:
Cropper.crop(bounds_file_path, orthophoto_file, get_orthophoto_vars(args), keep_original=not args.optimize_disk_space, warp_options=['-dstalpha'])
@@ -99,7 +99,7 @@ def post_orthophoto_steps(args, bounds_file_path, orthophoto_file, orthophoto_ti
generate_kmz(orthophoto_file)
if args.tiles:
generate_orthophoto_tiles(orthophoto_file, orthophoto_tiles_dir, args.max_concurrency)
generate_orthophoto_tiles(orthophoto_file, orthophoto_tiles_dir, args.max_concurrency, resolution)
if args.cog:
convert_to_cogeo(orthophoto_file, max_workers=args.max_concurrency, compression=args.orthophoto_compression)

View file

@@ -13,7 +13,7 @@ from opendm import system
from opendm import context
from opendm import camera
from opendm import location
from opendm.photo import find_largest_photo_dim, find_largest_photo
from opendm.photo import find_largest_photo_dims, find_largest_photo
from opensfm.large import metadataset
from opensfm.large import tools
from opensfm.actions import undistort
@@ -64,7 +64,6 @@ class OSFMContext:
"Check that the images have enough overlap, "
"that there are enough recognizable features "
"and that the images are in focus. "
"You could also try to increase the --min-num-features parameter."
"The program will now exit.")
if rolling_shutter_correct:
@@ -211,11 +210,25 @@ class OSFMContext:
'lowest': 0.0675,
}
max_dim = find_largest_photo_dim(photos)
max_dims = find_largest_photo_dims(photos)
if max_dim > 0:
if max_dims is not None:
w, h = max_dims
max_dim = max(w, h)
log.ODM_INFO("Maximum photo dimensions: %spx" % str(max_dim))
feature_process_size = int(max_dim * feature_quality_scale[args.feature_quality])
lower_limit = 320
upper_limit = 4480
megapixels = (w * h) / 1e6
multiplier = 1
if megapixels < 2:
multiplier = 2
elif megapixels > 42:
multiplier = 0.5
factor = min(1, feature_quality_scale[args.feature_quality] * multiplier)
feature_process_size = min(upper_limit, max(lower_limit, int(max_dim * factor)))
log.ODM_INFO("Photo dimensions for feature extraction: %ipx" % feature_process_size)
else:
log.ODM_WARNING("Cannot compute max image dimensions, going with defaults")
@@ -283,9 +296,8 @@ class OSFMContext:
config.append("matcher_type: %s" % osfm_matchers[matcher_type])
# GPU acceleration?
if has_gpu(args):
max_photo = find_largest_photo(photos)
w, h = max_photo.width, max_photo.height
if has_gpu(args) and max_dims is not None:
w, h = max_dims
if w > h:
h = int((h / w) * feature_process_size)
w = int(feature_process_size)
@@ -559,6 +571,8 @@ class OSFMContext:
pdf_report.save_report("report.pdf")
if os.path.exists(osfm_report_path):
if os.path.exists(report_path):
os.unlink(report_path)
shutil.move(osfm_report_path, report_path)
else:
log.ODM_WARNING("Report could not be generated")
@@ -779,3 +793,12 @@ def get_all_submodel_paths(submodels_path, *all_paths):
result.append([os.path.join(submodels_path, f, ap) for ap in all_paths])
return result
def is_submodel(opensfm_root):
# A bit hackish, but works without introducing additional markers / flags
# Look at the path of the opensfm directory and see if "submodel_" is part of it
parts = os.path.abspath(opensfm_root).split(os.path.sep)
return (len(parts) >= 2 and parts[-2][:9] == "submodel_") or \
os.path.isfile(os.path.join(opensfm_root, "split_merge_stop_at_reconstruction.txt")) or \
os.path.isfile(os.path.join(opensfm_root, "features", "empty"))

View file

@@ -430,7 +430,8 @@ class ODM_Photo:
camera_projection = camera_projection.lower()
# Parrot Sequoia's "fisheye" model maps to "fisheye_opencv"
if camera_projection == "fisheye" and self.camera_make.lower() == "parrot" and self.camera_model.lower() == "sequoia":
# or better yet, replace all fisheye with fisheye_opencv, but wait to change API signature
if camera_projection == "fisheye":
camera_projection = "fisheye_opencv"
if camera_projection in projections:
@@ -925,3 +926,6 @@ class ODM_Photo:
return self.width * self.height / 1e6
else:
return 0.0
def is_make_model(self, make, model):
return self.camera_make.lower() == make.lower() and self.camera_model.lower() == model.lower()

View file

@@ -9,6 +9,8 @@ from opendm.concurrency import parallel_map
from opendm.utils import double_quote
from opendm.boundary import as_polygon, as_geojson
from opendm.dem.pdal import run_pipeline
from opendm.opc import classify
from opendm.dem import commands
def ply_info(input_ply):
if not os.path.exists(input_ply):
@@ -274,6 +276,32 @@ def merge_ply(input_point_cloud_files, output_file, dims=None):
system.run(' '.join(cmd))
def post_point_cloud_steps(args, tree, rerun=False):
# Classify and rectify before generating derivate files
if args.pc_classify:
pc_classify_marker = os.path.join(tree.odm_georeferencing, 'pc_classify_done.txt')
if not io.file_exists(pc_classify_marker) or rerun:
log.ODM_INFO("Classifying {} using Simple Morphological Filter (1/2)".format(tree.odm_georeferencing_model_laz))
commands.classify(tree.odm_georeferencing_model_laz,
args.smrf_scalar,
args.smrf_slope,
args.smrf_threshold,
args.smrf_window
)
log.ODM_INFO("Classifying {} using OpenPointClass (2/2)".format(tree.odm_georeferencing_model_laz))
classify(tree.odm_georeferencing_model_laz, args.max_concurrency)
with open(pc_classify_marker, 'w') as f:
f.write('Classify: smrf\n')
f.write('Scalar: {}\n'.format(args.smrf_scalar))
f.write('Slope: {}\n'.format(args.smrf_slope))
f.write('Threshold: {}\n'.format(args.smrf_threshold))
f.write('Window: {}\n'.format(args.smrf_window))
if args.pc_rectify:
commands.rectify(tree.odm_georeferencing_model_laz)
# XYZ point cloud output
if args.pc_csv:
log.ODM_INFO("Creating CSV file (XYZ format)")

View file

@@ -19,6 +19,7 @@ RS_DATABASE = {
'dji fc220': 64, # DJI Mavic Pro (Platinum)
'hasselblad l1d-20c': lambda p: 47 if p.get_capture_megapixels() < 17 else 56, # DJI Mavic 2 Pro (at 16:10 => 16.8MP 47ms, at 3:2 => 19.9MP 56ms. 4:3 has 17.7MP with same image height as 3:2 which can be concluded as same sensor readout)
'hasselblad l2d-20c': 16.6, # DJI Mavic 3 (not enterprise version)
'dji fc3582': lambda p: 26 if p.get_capture_megapixels() < 48 else 60, # DJI Mini 3 pro (at 48MP readout is 60ms, at 12MP it's 26ms)

View file

@@ -35,12 +35,12 @@ def dn_to_temperature(photo, image, images_path):
# Every camera stores thermal information differently
# The following will work for MicaSense Altum cameras
# but not necessarily for others
if photo.camera_make == "MicaSense" and photo.camera_model == "Altum":
if photo.camera_make == "MicaSense" and photo.camera_model[:5] == "Altum":
image = image.astype("float32")
image -= (273.15 * 100.0) # Values are centi-Kelvin; with the 0.01 scaling below this converts to Celsius
image *= 0.01
return image
elif photo.camera_make == "DJI" and photo.camera_model == "ZH20T":
filename, file_extension = os.path.splitext(photo.filename)
# DJI H20T high gain mode supports measurement of -40~150 celsius degrees
if file_extension.lower() in [".tif", ".tiff"] and image.min() >= 23315: # Calibrated grayscale tif
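A worked example for the Altum branch above: pixel values arrive in centi-Kelvin, so a raw value of 29815 becomes (29815 - 27315) * 0.01 = 25.0 °C. The same two steps in isolation:

import numpy as np

image = np.array([29815], dtype="float32")  # illustrative centi-Kelvin sample
image -= (273.15 * 100.0)                   # subtract 0 °C expressed in centi-Kelvin
image *= 0.01                               # hundredths of a degree -> degrees
print(image)                                # [25.]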

View file

@@ -1,16 +1,25 @@
import os
import sys
import math
from opendm import log
from opendm import system
from opendm import io
def generate_tiles(geotiff, output_dir, max_concurrency):
gdal2tiles = os.path.join(os.path.dirname(__file__), "gdal2tiles.py")
system.run('%s "%s" --processes %s -z 5-21 -n -w none "%s" "%s"' % (sys.executable, gdal2tiles, max_concurrency, geotiff, output_dir))
def generate_tiles(geotiff, output_dir, max_concurrency, resolution):
circumference_earth_cm = 2*math.pi*637_813_700
px_per_tile = 256
resolution_equator_cm = circumference_earth_cm/px_per_tile
zoom = math.ceil(math.log(resolution_equator_cm/resolution, 2))
min_zoom = 5 # 4.89 km/px
max_zoom = min(zoom, 23) # No deeper zoom than 23 (1.86 cm/px at equator)
gdal2tiles = os.path.join(os.path.dirname(__file__), "gdal2tiles.py")
system.run('%s "%s" --processes %s -z %s-%s -n -w none "%s" "%s"' % (sys.executable, gdal2tiles, max_concurrency, min_zoom, max_zoom, geotiff, output_dir))
def generate_orthophoto_tiles(geotiff, output_dir, max_concurrency):
def generate_orthophoto_tiles(geotiff, output_dir, max_concurrency, resolution):
try:
generate_tiles(geotiff, output_dir, max_concurrency)
generate_tiles(geotiff, output_dir, max_concurrency, resolution)
except Exception as e:
log.ODM_WARNING("Cannot generate orthophoto tiles: %s" % str(e))
@@ -37,10 +46,10 @@ def generate_colored_hillshade(geotiff):
log.ODM_WARNING("Cannot generate colored hillshade: %s" % str(e))
return (None, None, None)
def generate_dem_tiles(geotiff, output_dir, max_concurrency):
def generate_dem_tiles(geotiff, output_dir, max_concurrency, resolution):
try:
colored_dem, hillshade_dem, colored_hillshade_dem = generate_colored_hillshade(geotiff)
generate_tiles(colored_hillshade_dem, output_dir, max_concurrency)
generate_tiles(colored_hillshade_dem, output_dir, max_concurrency, resolution)
# Cleanup
for f in [colored_dem, hillshade_dem, colored_hillshade_dem]:
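To make the zoom arithmetic in generate_tiles concrete: a single 256 px tile spans the whole equator at zoom 0, so resolution_equator_cm = 2π · 637,813,700 / 256 ≈ 15,654,303 cm/px, and each zoom level halves it. A 5 cm/px orthophoto therefore needs zoom = ceil(log2(15,654,303 / 5)) = 22, under the cap of 23 (≈ 1.86 cm/px):

import math

circumference_earth_cm = 2 * math.pi * 637_813_700
resolution_equator_cm = circumference_earth_cm / 256      # ~15,654,303 cm/px at zoom 0
zoom = math.ceil(math.log(resolution_equator_cm / 5, 2))  # target resolution: 5 cm/px
print(zoom, min(zoom, 23))                                # 22 22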

View file

@@ -13,6 +13,7 @@ from opendm import log
from opendm import io
from opendm import system
from opendm import context
from opendm import multispectral
from opendm.progress import progressbc
from opendm.photo import ODM_Photo
@@ -27,7 +28,7 @@ class ODM_Reconstruction(object):
self.gcp = None
self.multi_camera = self.detect_multi_camera()
self.filter_photos()
def detect_multi_camera(self):
"""
Looks at the reconstruction photos and determines if this
@@ -45,22 +46,88 @@ class ODM_Reconstruction(object):
band_photos[p.band_name].append(p)
bands_count = len(band_photos)
if bands_count >= 2 and bands_count <= 8:
# Band name with the maximum number of photos
max_band_name = None
max_photos = -1
for band_name in band_photos:
if len(band_photos[band_name]) > max_photos:
max_band_name = band_name
max_photos = len(band_photos[band_name])
if bands_count >= 2 and bands_count <= 10:
# Validate that all bands have the same number of images,
# otherwise this is not a multi-camera setup
img_per_band = len(band_photos[p.band_name])
for band in band_photos:
if len(band_photos[band]) != img_per_band:
log.ODM_ERROR("Multi-camera setup detected, but band \"%s\" (identified from \"%s\") has only %s images (instead of %s), perhaps images are missing or are corrupted. Please include all necessary files to process all bands and try again." % (band, band_photos[band][0].filename, len(band_photos[band]), img_per_band))
raise RuntimeError("Invalid multi-camera images")
img_per_band = len(band_photos[max_band_name])
mc = []
for band_name in band_indexes:
mc.append({'name': band_name, 'photos': band_photos[band_name]})
# Sort by band index
mc.sort(key=lambda x: band_indexes[x['name']])
filter_missing = False
for band in band_photos:
if len(band_photos[band]) < img_per_band:
log.ODM_WARNING("Multi-camera setup detected, but band \"%s\" (identified from \"%s\") has only %s images (instead of %s), perhaps images are missing or are corrupted." % (band, band_photos[band][0].filename, len(band_photos[band]), len(band_photos[max_band_name])))
filter_missing = True
if filter_missing:
# Calculate files to ignore
_, p2s = multispectral.compute_band_maps(mc, max_band_name)
max_files_per_band = 0
for filename in p2s:
max_files_per_band = max(max_files_per_band, len(p2s[filename]))
for filename in p2s:
if len(p2s[filename]) < max_files_per_band:
photos_to_remove = p2s[filename] + [p for p in self.photos if p.filename == filename]
for photo in photos_to_remove:
log.ODM_WARNING("Excluding %s" % photo.filename)
self.photos = [p for p in self.photos if p != photo]
for i in range(len(mc)):
mc[i]['photos'] = [p for p in mc[i]['photos'] if p != photo]
log.ODM_INFO("New image count: %s" % len(self.photos))
# We enforce a normalized band order for all bands that we can identify
# and rely on the manufacturer's band_indexes as a fallback for all others
normalized_band_order = {
'RGB': '0',
'REDGREENBLUE': '0',
'RED': '1',
'R': '1',
'GREEN': '2',
'G': '2',
'BLUE': '3',
'B': '3',
'NIR': '4',
'N': '4',
'REDEDGE': '5',
'RE': '5',
'PANCHRO': '6',
'LWIR': '7',
'L': '7',
}
for band_name in band_indexes:
if band_name.upper() not in normalized_band_order:
log.ODM_WARNING(f"Cannot identify order for {band_name} band, using manufacturer suggested index instead")
# Sort
mc.sort(key=lambda x: normalized_band_order.get(x['name'].upper(), '9' + band_indexes[x['name']]))
for c, d in enumerate(mc):
log.ODM_INFO(f"Band {c + 1}: {d['name']}")
return mc
return None
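The sort above compares strings: recognized bands map to '0' through '7', and anything unrecognized falls back to '9' plus the manufacturer's index, which places it after every known band. A self-contained sketch (the band_indexes values are hypothetical):

normalized_band_order = {'RGB': '0', 'RED': '1', 'GREEN': '2', 'BLUE': '3', 'NIR': '4', 'REDEDGE': '5'}
band_indexes = {'Custom': '2'}  # hypothetical manufacturer-provided index

mc = [{'name': n} for n in ('NIR', 'Custom', 'Red', 'Rededge')]
mc.sort(key=lambda x: normalized_band_order.get(x['name'].upper(), '9' + band_indexes.get(x['name'], '0')))
print([d['name'] for d in mc])  # ['Red', 'NIR', 'Rededge', 'Custom']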
@@ -82,6 +149,12 @@ class ODM_Reconstruction(object):
if 'rgb' in bands or 'redgreenblue' in bands:
if 'red' in bands and 'green' in bands and 'blue' in bands:
bands_to_remove.append(bands['rgb'] if 'rgb' in bands else bands['redgreenblue'])
# Mavic 3M's RGB camera lenses are too different from the multispectral ones
# so we drop the RGB channel instead
elif self.photos[0].is_make_model("DJI", "M3M") and 'red' in bands and 'green' in bands:
bands_to_remove.append(bands['rgb'] if 'rgb' in bands else bands['redgreenblue'])
else:
for b in ['red', 'green', 'blue']:
if b in bands:

View file

@@ -4,7 +4,7 @@ import json
from opendm import log
from opendm.photo import find_largest_photo_dims
from osgeo import gdal
from opendm.loghelpers import double_quote
from opendm.arghelpers import double_quote
class NumpyEncoder(json.JSONEncoder):
def default(self, obj):

View file

@@ -54,8 +54,10 @@ class SrtFileParser:
if not self.gps_data:
for d in self.data:
lat, lon, alt = d.get('latitude'), d.get('longitude'), d.get('altitude')
if alt is None:
alt = 0
tm = d.get('start')
if lat is not None and lon is not None:
if self.ll_to_utm is None:
self.ll_to_utm, self.utm_to_ll = location.utm_transformers_from_ll(lon, lat)
@@ -127,6 +129,20 @@ class SrtFileParser:
# 00:00:35,000 --> 00:00:36,000
# F/6.3, SS 60, ISO 100, EV 0, RTK (120.083799, 30.213635, 28), HOME (120.084146, 30.214243, 103.55m), D 75.36m, H 76.19m, H.S 0.30m/s, V.S 0.00m/s, F.PRY (-5.3°, 2.1°, 28.3°), G.PRY (-40.0°, 0.0°, 28.2°)
# DJI Unknown Model #1
# 1
# 00:00:00,000 --> 00:00:00,033
# <font size="28">SrtCnt : 1, DiffTime : 33ms
# 2024-01-18 10:23:26.397
# [iso : 150] [shutter : 1/5000.0] [fnum : 170] [ev : 0] [ct : 5023] [color_md : default] [focal_len : 240] [dzoom_ratio: 10000, delta:0],[latitude: -22.724555] [longitude: -47.602414] [rel_alt: 0.300 abs_alt: 549.679] </font>
# DJI Mavic 2 Zoom
# 1
# 00:00:00,000 --> 00:00:00,041
# <font size="36">FrameCnt : 1, DiffTime : 41ms
# 2023-07-15 11:55:16,320,933
# [iso : 100] [shutter : 1/400.0] [fnum : 280] [ev : 0] [ct : 5818] [color_md : default] [focal_len : 240] [latitude : 0.000000] [longtitude : 0.000000] [altitude: 0.000000] </font>
with open(self.filename, 'r') as f:
iso = None
@@ -197,12 +213,14 @@ class SrtFileParser:
latitude = match_single([
("latitude: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("latitude : ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("GPS \([\d\.\-]+,? ([\d\.\-]+),? [\d\.\-]+\)", lambda v: float(v) if v != 0 else None),
("RTK \([-+]?\d+\.\d+, (-?\d+\.\d+), -?\d+\)", lambda v: float(v) if v != 0 else None),
], line)
longitude = match_single([
("longitude: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("longtitude : ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("GPS \(([\d\.\-]+),? [\d\.\-]+,? [\d\.\-]+\)", lambda v: float(v) if v != 0 else None),
("RTK \((-?\d+\.\d+), [-+]?\d+\.\d+, -?\d+\)", lambda v: float(v) if v != 0 else None),
], line)
@@ -211,4 +229,5 @@ class SrtFileParser:
("altitude: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("GPS \([\d\.\-]+,? [\d\.\-]+,? ([\d\.\-]+)\)", lambda v: float(v) if v != 0 else None),
("RTK \([-+]?\d+\.\d+, [-+]?\d+\.\d+, (-?\d+)\)", lambda v: float(v) if v != 0 else None),
("abs_alt: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
], line)
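The new Mavic 2 Zoom patterns reproduce DJI's own field spelling, including the "longtitude" typo visible in the sample block above, so the regex must match it verbatim. A standalone check against an illustrative line in that format:

import re

line = "[latitude : -22.724555] [longtitude : -47.602414] [altitude: 549.679000]"
lat = re.search(r"latitude : ([\d\.\-]+)", line)
lon = re.search(r"longtitude : ([\d\.\-]+)", line)
alt = re.search(r"altitude: ([\d\.\-]+)", line)
print(lat.group(1), lon.group(1), alt.group(1))  # -22.724555 -47.602414 549.679000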

run.py
View file

@@ -13,7 +13,7 @@ from opendm import system
from opendm import io
from opendm.progress import progressbc
from opendm.utils import get_processing_results_paths, rm_r
from opendm.loghelpers import args_to_dict
from opendm.arghelpers import args_to_dict, save_opts, compare_args, find_rerun_stage
from stages.odm_app import ODMApp
@@ -29,20 +29,26 @@ if __name__ == '__main__':
log.ODM_INFO('Initializing ODM %s - %s' % (odm_version(), system.now()))
progressbc.set_project_name(args.name)
args.project_path = os.path.join(args.project_path, args.name)
if not io.dir_exists(args.project_path):
log.ODM_ERROR('Directory %s does not exist.' % args.name)
exit(1)
opts_json = os.path.join(args.project_path, "options.json")
auto_rerun_stage, opts_diff = find_rerun_stage(opts_json, args, config.rerun_stages, config.processopts)
if auto_rerun_stage is not None and len(auto_rerun_stage) > 0:
log.ODM_INFO("Rerunning from: %s" % auto_rerun_stage[0])
args.rerun_from = auto_rerun_stage
# Print args
args_dict = args_to_dict(args)
log.ODM_INFO('==============')
for k in args_dict.keys():
log.ODM_INFO('%s: %s' % (k, args_dict[k]))
log.ODM_INFO('%s: %s%s' % (k, args_dict[k], ' [changed]' if k in opts_diff else ''))
log.ODM_INFO('==============')
progressbc.set_project_name(args.name)
# Add project dir if doesn't exist
args.project_path = os.path.join(args.project_path, args.name)
if not io.dir_exists(args.project_path):
log.ODM_WARNING('Directory %s does not exist. Creating it now.' % args.name)
system.mkdir_p(os.path.abspath(args.project_path))
# If user asks to rerun everything, delete all of the existing progress directories.
if args.rerun_all:
@@ -57,6 +63,9 @@ if __name__ == '__main__':
app = ODMApp(args)
retcode = app.execute()
if retcode == 0:
save_opts(opts_json, args)
# Do not show ASCII art for local submodels runs
if retcode == 0 and not "submodels" in args.project_path:
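The flow in run.py is: on a successful run the options are persisted with save_opts, and the next invocation hands the saved options.json plus the current args to find_rerun_stage, which picks the earliest pipeline stage affected by any changed option. A rough sketch of that decision with made-up option and stage tables (the real ones live in opendm/arghelpers.py and config):

import json, os

OPT_TO_STAGE = {"feature_type": "opensfm", "pc_filter": "openmvs", "dem_resolution": "odm_dem"}  # hypothetical
STAGE_ORDER = ["dataset", "opensfm", "openmvs", "odm_dem", "odm_orthophoto"]

def find_rerun_stage_sketch(opts_json, current_args):
    if not os.path.isfile(opts_json):
        return None, {}                     # first run: nothing to compare
    with open(opts_json) as f:
        saved = json.load(f)
    diff = {k: v for k, v in current_args.items() if saved.get(k) != v}
    stages = [OPT_TO_STAGE[k] for k in diff if k in OPT_TO_STAGE]
    if not stages:
        return None, diff
    return min(stages, key=STAGE_ORDER.index), diff  # earliest affected stage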

View file

@@ -81,14 +81,11 @@ class ODMMvsTexStage(types.ODM_Stage):
# Format arguments to fit Mvs-Texturing app
skipGlobalSeamLeveling = ""
skipLocalSeamLeveling = ""
keepUnseenFaces = ""
nadir = ""
if args.texturing_skip_global_seam_leveling:
skipGlobalSeamLeveling = "--skip_global_seam_leveling"
if args.texturing_skip_local_seam_leveling:
skipLocalSeamLeveling = "--skip_local_seam_leveling"
if args.texturing_keep_unseen_faces:
keepUnseenFaces = "--keep_unseen_faces"
if (r['nadir']):
@@ -102,7 +99,6 @@ class ODMMvsTexStage(types.ODM_Stage):
'dataTerm': 'gmi',
'outlierRemovalType': 'gauss_clamping',
'skipGlobalSeamLeveling': skipGlobalSeamLeveling,
'skipLocalSeamLeveling': skipLocalSeamLeveling,
'keepUnseenFaces': keepUnseenFaces,
'toneMapping': 'none',
'nadirMode': nadir,
@@ -114,7 +110,7 @@ class ODMMvsTexStage(types.ODM_Stage):
mvs_tmp_dir = os.path.join(r['out_dir'], 'tmp')
# Make sure tmp directory is empty
# mvstex creates a tmp directory, so make sure it is empty
if io.dir_exists(mvs_tmp_dir):
log.ODM_INFO("Removing old tmp directory {}".format(mvs_tmp_dir))
shutil.rmtree(mvs_tmp_dir)
@@ -125,7 +121,6 @@ class ODMMvsTexStage(types.ODM_Stage):
'-t {toneMapping} '
'{intermediate} '
'{skipGlobalSeamLeveling} '
'{skipLocalSeamLeveling} '
'{keepUnseenFaces} '
'{nadirMode} '
'{labelingFile} '

View file

@@ -27,6 +27,7 @@ class ODMApp:
Initializes the application and defines the ODM application pipeline stages
"""
json_log_paths = [os.path.join(args.project_path, "log.json")]
if args.copy_to:
json_log_paths.append(args.copy_to)

View file

@@ -12,7 +12,6 @@ from opendm.cropper import Cropper
from opendm import pseudogeo
from opendm.tiles.tiler import generate_dem_tiles
from opendm.cogeo import convert_to_cogeo
from opendm.opc import classify
class ODMDEMStage(types.ODM_Stage):
def process(self, args, outputs):
@@ -35,7 +34,6 @@ class ODMDEMStage(types.ODM_Stage):
ignore_resolution=ignore_resolution and args.ignore_gsd,
has_gcp=reconstruction.has_gcp())
log.ODM_INFO('Classify: ' + str(args.pc_classify))
log.ODM_INFO('Create DSM: ' + str(args.dsm))
log.ODM_INFO('Create DTM: ' + str(args.dtm))
log.ODM_INFO('DEM input file {0} found: {1}'.format(dem_input, str(pc_model_found)))
@@ -45,34 +43,9 @@ class ODMDEMStage(types.ODM_Stage):
if not io.dir_exists(odm_dem_root):
system.mkdir_p(odm_dem_root)
if args.pc_classify and pc_model_found:
pc_classify_marker = os.path.join(odm_dem_root, 'pc_classify_done.txt')
if not io.file_exists(pc_classify_marker) or self.rerun():
log.ODM_INFO("Classifying {} using Simple Morphological Filter (1/2)".format(dem_input))
commands.classify(dem_input,
args.smrf_scalar,
args.smrf_slope,
args.smrf_threshold,
args.smrf_window
)
log.ODM_INFO("Classifying {} using OpenPointClass (2/2)".format(dem_input))
classify(dem_input, args.max_concurrency)
with open(pc_classify_marker, 'w') as f:
f.write('Classify: smrf\n')
f.write('Scalar: {}\n'.format(args.smrf_scalar))
f.write('Slope: {}\n'.format(args.smrf_slope))
f.write('Threshold: {}\n'.format(args.smrf_threshold))
f.write('Window: {}\n'.format(args.smrf_window))
progress = 20
self.update_progress(progress)
if args.pc_rectify:
commands.rectify(dem_input)
# Do we need to process anything here?
if (args.dsm or args.dtm) and pc_model_found:
dsm_output_filename = os.path.join(odm_dem_root, 'dsm.tif')
@@ -100,8 +73,8 @@ class ODMDEMStage(types.ODM_Stage):
resolution=resolution / 100.0,
decimation=args.dem_decimation,
max_workers=args.max_concurrency,
keep_unfilled_copy=args.dem_euclidean_map,
max_tiles=math.ceil(len(reconstruction.photos) / 2)
with_euclidean_map=args.dem_euclidean_map,
max_tiles=None if reconstruction.has_geotagged_photos() else math.ceil(len(reconstruction.photos) / 2)
)
dem_geotiff_path = os.path.join(odm_dem_root, "{}.tif".format(product))
@@ -111,27 +84,16 @@ class ODMDEMStage(types.ODM_Stage):
# Crop DEM
Cropper.crop(bounds_file_path, dem_geotiff_path, utils.get_dem_vars(args), keep_original=not args.optimize_disk_space)
if args.dem_euclidean_map:
unfilled_dem_path = io.related_file_path(dem_geotiff_path, postfix=".unfilled")
if args.crop > 0 or args.boundary:
# Crop unfilled DEM
Cropper.crop(bounds_file_path, unfilled_dem_path, utils.get_dem_vars(args), keep_original=not args.optimize_disk_space)
commands.compute_euclidean_map(unfilled_dem_path,
io.related_file_path(dem_geotiff_path, postfix=".euclideand"),
overwrite=True)
if pseudo_georeference:
pseudogeo.add_pseudo_georeferencing(dem_geotiff_path)
if args.tiles:
generate_dem_tiles(dem_geotiff_path, tree.path("%s_tiles" % product), args.max_concurrency)
generate_dem_tiles(dem_geotiff_path, tree.path("%s_tiles" % product), args.max_concurrency, resolution)
if args.cog:
convert_to_cogeo(dem_geotiff_path, max_workers=args.max_concurrency)
progress += 30
progress += 40
self.update_progress(progress)
else:
log.ODM_WARNING('Found existing outputs in: %s' % odm_dem_root)

View file

@@ -60,7 +60,7 @@ class ODMeshingStage(types.ODM_Stage):
available_cores=args.max_concurrency,
method='poisson' if args.fast_orthophoto else 'gridded',
smooth_dsm=True,
max_tiles=math.ceil(len(reconstruction.photos) / 2))
max_tiles=None if reconstruction.has_geotagged_photos() else math.ceil(len(reconstruction.photos) / 2))
else:
log.ODM_WARNING('Found a valid ODM 2.5D Mesh file in: %s' %
tree.odm_25dmesh)

View file

@@ -7,6 +7,7 @@ from opendm import context
from opendm import types
from opendm import gsd
from opendm import orthophoto
from opendm.osfm import is_submodel
from opendm.concurrency import get_max_memory_mb
from opendm.cutline import compute_cutline
from opendm.utils import double_quote
@@ -28,10 +29,10 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
if not io.file_exists(tree.odm_orthophoto_tif) or self.rerun():
resolution = 1.0 / (gsd.cap_resolution(args.orthophoto_resolution, tree.opensfm_reconstruction,
ignore_gsd=args.ignore_gsd,
ignore_resolution=(not reconstruction.is_georeferenced()) and args.ignore_gsd,
has_gcp=reconstruction.has_gcp()) / 100.0)
resolution = gsd.cap_resolution(args.orthophoto_resolution, tree.opensfm_reconstruction,
ignore_gsd=args.ignore_gsd,
ignore_resolution=(not reconstruction.is_georeferenced()) and args.ignore_gsd,
has_gcp=reconstruction.has_gcp())
# odm_orthophoto definitions
kwargs = {
@@ -39,7 +40,7 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
'log': tree.odm_orthophoto_log,
'ortho': tree.odm_orthophoto_render,
'corners': tree.odm_orthophoto_corners,
'res': resolution,
'res': 1.0 / (resolution/100.0),
'bands': '',
'depth_idx': '',
'inpaint': '',
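Unit note on the change above: gsd.cap_resolution returns ground resolution in cm/px, so dividing by 100 gives m/px and the reciprocal gives pixels per meter, which is what the 'res' argument carries (px/m is inferred from the arithmetic, not confirmed here). For example:

resolution = 5.0                   # cm/px, as returned by cap_resolution
res = 1.0 / (resolution / 100.0)   # 5 cm/px -> 0.05 m/px -> 20 px/m
print(res)                         # 20.0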
@@ -114,6 +115,7 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
# Cutline computation, before cropping
# We want to use the full orthophoto, not the cropped one.
submodel_run = is_submodel(tree.opensfm)
if args.orthophoto_cutline:
cutline_file = os.path.join(tree.odm_orthophoto, "cutline.gpkg")
@@ -122,15 +124,18 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
cutline_file,
args.max_concurrency,
scale=0.25)
if submodel_run:
orthophoto.compute_mask_raster(tree.odm_orthophoto_tif, cutline_file,
os.path.join(tree.odm_orthophoto, "odm_orthophoto_cut.tif"),
blend_distance=20, only_max_coords_feature=True)
else:
log.ODM_INFO("Not a submodel run, skipping mask raster generation")
orthophoto.compute_mask_raster(tree.odm_orthophoto_tif, cutline_file,
os.path.join(tree.odm_orthophoto, "odm_orthophoto_cut.tif"),
blend_distance=20, only_max_coords_feature=True)
orthophoto.post_orthophoto_steps(args, bounds_file_path, tree.odm_orthophoto_tif, tree.orthophoto_tiles)
orthophoto.post_orthophoto_steps(args, bounds_file_path, tree.odm_orthophoto_tif, tree.orthophoto_tiles, resolution)
# Generate feathered orthophoto also
if args.orthophoto_cutline:
if args.orthophoto_cutline and submodel_run:
orthophoto.feather_raster(tree.odm_orthophoto_tif,
os.path.join(tree.odm_orthophoto, "odm_orthophoto_feathered.tif"),
blend_distance=20

View file

@@ -65,7 +65,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
filter_point_th = -20
config = [
" --resolution-level %s" % int(resolution_level),
"--resolution-level %s" % int(resolution_level),
'--dense-config-file "%s"' % densify_ini_file,
"--max-resolution %s" % int(outputs['undist_image_max_size']),
"--max-threads %s" % args.max_concurrency,
@@ -79,7 +79,6 @@ class ODMOpenMVSStage(types.ODM_Stage):
gpu_config = []
use_gpu = has_gpu(args)
if use_gpu:
#gpu_config.append("--cuda-device -3")
gpu_config.append("--cuda-device -1")
else:
gpu_config.append("--cuda-device -2")
@@ -95,12 +94,13 @@ class ODMOpenMVSStage(types.ODM_Stage):
extra_config.append("--ignore-mask-label 0")
with open(densify_ini_file, 'w+') as f:
f.write("Optimize = 7\n")
f.write("Optimize = 7\nMin Views Filter = 1\n")
def run_densify():
system.run('"%s" "%s" %s' % (context.omvs_densify_path,
openmvs_scene_file,
' '.join(config + gpu_config + extra_config)))
try:
run_densify()
except system.SubprocessException as e:
@@ -110,7 +110,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
log.ODM_WARNING("OpenMVS failed with GPU, is your graphics card driver up to date? Falling back to CPU.")
gpu_config = ["--cuda-device -2"]
run_densify()
elif (e.errorCode == 137 or e.errorCode == 3221226505) and not pc_tile:
elif (e.errorCode == 137 or e.errorCode == 143 or e.errorCode == 3221226505) and not pc_tile:
log.ODM_WARNING("OpenMVS ran out of memory, we're going to turn on tiling to see if we can process this.")
pc_tile = True
config.append("--fusion-mode 1")
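For reference: exit code 137 is 128+9 (SIGKILL, the usual Linux out-of-memory kill), the newly added 143 is 128+15 (SIGTERM), and 3221226505 is 0xC0000409, an abnormal-termination status on Windows. The retry logic follows the pattern sketched here (SubprocessException mirrors the errorCode attribute used above):

class SubprocessException(Exception):
    def __init__(self, msg, errorCode):
        super().__init__(msg)
        self.errorCode = errorCode

OOM_EXIT_CODES = {137, 143, 3221226505}

def densify_with_fallback(run_densify, enable_tiling):
    try:
        run_densify()
    except SubprocessException as e:
        if e.errorCode in OOM_EXIT_CODES:
            enable_tiling()   # e.g. turn on tiling via "--fusion-mode 1"
            run_densify()
        else:
            raise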
@@ -127,10 +127,10 @@ class ODMOpenMVSStage(types.ODM_Stage):
subscene_densify_ini_file = os.path.join(tree.openmvs, 'subscene-config.ini')
with open(subscene_densify_ini_file, 'w+') as f:
f.write("Optimize = 0\n")
f.write("Optimize = 0\nEstimation Geometric Iters = 0\nMin Views Filter = 1\n")
config = [
"--sub-scene-area 660000",
"--sub-scene-area 660000", # 8000
"--max-threads %s" % args.max_concurrency,
'-w "%s"' % depthmaps_dir,
"-v 0",
@@ -161,9 +161,13 @@ class ODMOpenMVSStage(types.ODM_Stage):
config = [
'--resolution-level %s' % int(resolution_level),
'--max-resolution %s' % int(outputs['undist_image_max_size']),
"--sub-resolution-levels %s" % subres_levels,
'--dense-config-file "%s"' % subscene_densify_ini_file,
'--number-views-fuse %s' % number_views_fuse,
'--max-threads %s' % args.max_concurrency,
'--archive-type 3',
'--postprocess-dmaps 0',
'--geometric-iters 0',
'-w "%s"' % depthmaps_dir,
'-v 0',
]
@@ -179,7 +183,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
else:
# Filter
if args.pc_filter > 0:
system.run('"%s" "%s" --filter-point-cloud %s -v 0 %s' % (context.omvs_densify_path, scene_dense_mvs, filter_point_th, ' '.join(gpu_config)))
system.run('"%s" "%s" --filter-point-cloud %s -v 0 --archive-type 3 %s' % (context.omvs_densify_path, scene_dense_mvs, filter_point_th, ' '.join(gpu_config)))
else:
# Just rename
log.ODM_INFO("Skipped filtering, %s --> %s" % (scene_ply_unfiltered, scene_ply))
@@ -219,7 +223,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
try:
system.run('"%s" %s' % (context.omvs_densify_path, ' '.join(config + gpu_config + extra_config)))
except system.SubprocessException as e:
if e.errorCode == 137 or e.errorCode == 3221226505:
if e.errorCode == 137 or e.errorCode == 143 or e.errorCode == 3221226505:
log.ODM_WARNING("OpenMVS filtering ran out of memory, visibility checks will be skipped.")
skip_filtering()
else:

View file

@@ -266,7 +266,7 @@ class ODMMergeStage(types.ODM_Stage):
orthophoto_vars = orthophoto.get_orthophoto_vars(args)
orthophoto.merge(all_orthos_and_ortho_cuts, tree.odm_orthophoto_tif, orthophoto_vars)
orthophoto.post_orthophoto_steps(args, merged_bounds_file, tree.odm_orthophoto_tif, tree.orthophoto_tiles)
orthophoto.post_orthophoto_steps(args, merged_bounds_file, tree.odm_orthophoto_tif, tree.orthophoto_tiles, args.orthophoto_resolution)
elif len(all_orthos_and_ortho_cuts) == 1:
# Simply copy
log.ODM_WARNING("A single orthophoto/cutline pair was found between all submodels.")
@@ -306,7 +306,7 @@ class ODMMergeStage(types.ODM_Stage):
log.ODM_INFO("Created %s" % dem_file)
if args.tiles:
generate_dem_tiles(dem_file, tree.path("%s_tiles" % human_name.lower()), args.max_concurrency)
generate_dem_tiles(dem_file, tree.path("%s_tiles" % human_name.lower()), args.max_concurrency, args.dem_resolution)
if args.cog:
convert_to_cogeo(dem_file, max_workers=args.max_concurrency)