# BUILD_EPT
Generates an Entwine Point Tile (EPT) dataset from a LAS/LAZ point cloud. The step uses Entwine to reorganize the input point cloud into a hierarchical octree tileset optimized for streaming, web visualization, and spatial queries. The resulting dataset is compatible with Potree, Cesium, and other modern point cloud viewers.
For large datasets, the workflow automatically estimates the total point count and divides the build process into multiple subsets to avoid excessive memory usage. These subsets are built independently and then merged into a final EPT tileset. This allows the step to process very large point clouds reliably.
Typical use: preparing LiDAR datasets for web visualization or scalable cloud distribution.
## Contract

| Field | Value |
|---|---|
| Type | BUILD_EPT |
| Accepts | input_las: las |
| Produces | output_ept: ept |
| Params | none |
## Inputs

| Slot | Type | Description |
|---|---|---|
| input_las | las | Source point cloud dataset |
## Outputs

| Slot | Type | Description |
|---|---|---|
| output_ept | ept | Entwine Point Tile dataset directory |
The EPT output is a directory containing `ept.json`, `ept-data/`, and `ept-hierarchy/`.
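As a sketch, a downstream consumer could verify that an output directory has this shape before using it. The helper below is hypothetical and not part of the workflow; it only checks for the three entries named above:

```python
from pathlib import Path

def is_ept_dataset(path):
    """Return True if `path` contains the three entries an EPT
    dataset directory is expected to have (hypothetical helper)."""
    root = Path(path)
    return (
        (root / "ept.json").is_file()          # tileset metadata
        and (root / "ept-data").is_dir()       # point data tiles
        and (root / "ept-hierarchy").is_dir()  # octree hierarchy files
    )
```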
## What it does internally

The workflow performs three operations:
1. **Estimate point count**

   Runs `pdal info input.las` to determine the dataset's total point count.
2. **Calculate safe parallelization**

   Based on the point count, the workflow computes:

   - Points per GiB of available RAM
   - Required subset count
   - Subset size rounded up to the nearest power of 4

   It then decides whether to run `entwine build` (single pass) or `entwine build -s <id>` (per subset). This avoids RAM exhaustion on large datasets.
3. **Merge subsets (if needed)**

   If the dataset was split into subsets, runs `entwine merge` to produce a single unified EPT tileset.
This means the step can safely handle very large point clouds automatically, without any configuration changes.
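The sizing decision above can be sketched roughly as follows. The points-per-GiB constant and the exact rounding rule are assumptions made for illustration; the real values are described in memory-management.md:

```python
import math

def plan_subsets(point_count, ram_gib, points_per_gib=100_000_000):
    """Sketch of the subset-count decision.

    point_count    -- total points, e.g. as reported by `pdal info`
    ram_gib        -- RAM available to the build, in GiB
    points_per_gib -- ASSUMED points that fit per GiB of RAM; the real
                      constant is documented in memory-management.md

    Returns 1 for a single-pass `entwine build`, or a subset count
    (rounded up to a power of 4) for per-subset builds plus a merge.
    """
    capacity = ram_gib * points_per_gib
    if point_count <= capacity:
        return 1
    needed = math.ceil(point_count / capacity)
    subsets = 4
    while subsets < needed:  # round up to the nearest power of 4
        subsets *= 4
    return subsets
```

For example, with 8 GiB of RAM and the assumed constant, a 5-billion-point cloud needs 7 subsets, which rounds up to 16.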
## Recipe usage

```json
{
  "id": "build_ept",
  "type": "BUILD_EPT",
  "inputs": { "input_las": "job:input_las" },
  "outputs": { "output_ept": "step:build_ept.output_ept" }
}
```

## Artifact storage path

`artifacts/job_{id}/build_ept/`

The entire EPT directory is uploaded.
## Memory management details
See memory-management.md for the full explanation of the subset sizing algorithm.
## Related executors

- `BUILD_COPC` — COPC format, better for streaming/analytics
- `BUILD_POTREE` — Potree tileset format, better for Potree-based viewers