Track 03 / LiDAR & 3D
Oriented 3D cuboids in point-cloud space, LiDAR + camera sensor fusion, 3D semantic segmentation, multi-frame tracking. Reviewers trained on autonomous-driving and robotics schemas.
Oriented cuboids · car / truck / van / bus / pedestrian
Techniques
LiDAR labelling isn’t a variant of 2D — it’s a different craft. Dedicated reviewer pools, AV/robotics schemas, multi-sensor environments.
Class, heading, size, and position in point-cloud coordinates. Tight-to-object fit in cluttered sweeps, consistent heading convention across a batch, and occlusion handling for partially-scanned objects.
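For a concrete feel, a minimal cuboid record might carry the fields sketched below. The field names and the yaw-about-z, front-face-along-+x convention are illustrative assumptions, not a fixed export format.

```python
from dataclasses import dataclass

@dataclass
class Cuboid:
    """One oriented box in the sweep's point-cloud frame (illustrative field set)."""
    track_id: int                         # identity carried across sweeps and sensors
    label: str                            # "car", "truck", "van", "bus", "pedestrian", ...
    center: tuple[float, float, float]    # x, y, z of the box centre, metres
    size: tuple[float, float, float]      # length, width, height, metres
    heading: float                        # yaw about z, radians; front face along +x at zero
    num_lidar_points: int                 # returns inside the box, useful when reviewing occlusion
```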
Co-registered labels across sensor modalities. A car ID in the LiDAR sweep is the same ID in the camera frame, and in the radar return. Reviewers trained to resolve cross-sensor disagreements, not pick a favourite.
Dense per-point semantic labels — ground, drivable, static, dynamic, vegetation, building. For perception and mapping workflows where object-level boxes leave scene context on the table.
Object identity preserved across LiDAR sweeps with keyframe + interpolation workflow. Re-entry and occlusion resolution rules are schema-defined, not left to operator instinct.
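To make the keyframe-plus-interpolation idea concrete, here is a minimal sketch that blends a cuboid's centre and heading between two labelled keyframes. The function and field names are assumptions for illustration; the actual interpolation is whatever the labelling tool implements.

```python
import math

def interpolate_cuboid(kf_a: dict, kf_b: dict, t: float) -> dict:
    """Blend two keyframe cuboids at fraction t in [0, 1] (illustrative sketch).

    Each keyframe is a dict with 'center' = (x, y, z) and 'heading' = yaw in radians.
    """
    # Linear blend of the box centre.
    center = tuple(a + t * (b - a) for a, b in zip(kf_a["center"], kf_b["center"]))

    # Blend yaw along the shortest arc so a wrap from +pi to -pi doesn't spin the box.
    delta = (kf_b["heading"] - kf_a["heading"] + math.pi) % (2 * math.pi) - math.pi
    heading = kf_a["heading"] + t * delta

    return {"center": center, "heading": heading}
```

Operators label the keyframes; in-between frames take the interpolated box and get corrected wherever the object turns, accelerates, or drops out of view.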
Lane polylines, drivable-area polygons, ego-trajectory labels for HD-mapping pipelines and prediction models. Runs on aggregated sweeps or bird’s-eye-view projections.
Where we’ve delivered
3D data labelling is a different craft. The reviewer pool is trained on AV / robotics schemas and sensor stacks — not swapped in from 2D mid-batch.
Dashcam + LiDAR sweep datasets for L2–L4 perception. Multi-sensor fusion and multi-frame tracking.
Highway and distribution-yard datasets. Prioritised for edge cases: construction, night, adverse weather.
Warehouse, logistics, and last-mile robotics. Structured indoor environments with fixed taxonomy.
Aggregated point-cloud labelling for lane, sign, and infrastructure extraction in HD-map workflows.
Schema specifics
3D schemas fail in different places than 2D ones. These are the surfaces most often worth tightening before a LiDAR pilot runs clean; a sketch of how a schema fragment might pin them down follows the list.
Front-face axis convention — the single most common drift point on cuboid batches.
How far to extrapolate a box when only part of an object returned points.
Separate classes for drivable surface and generic ground — or collapsed.
Which sensor is the temporal source of truth, and tolerance for cross-sensor drift.
Per-object dynamic/static flag for motion prediction and mapping workflows.
Rules for when a re-appearing object keeps its ID vs. takes a new one.
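As a hypothetical example, with every key and value invented for illustration, a schema fragment covering those decisions might look like this:

```python
# Hypothetical schema fragment; every key and value below is illustrative, not our template.
CUBOID_SCHEMA = {
    "heading_convention": "front_face_along_plus_x",           # one yaw convention held across the batch
    "partial_object_policy": "extrapolate_to_typical_extent",  # vs. fit only the returned points
    "ground_classes": ["drivable_surface", "ground"],          # or a single collapsed ground class
    "temporal_reference_sensor": "lidar",                      # the clock other sensors are synced to
    "cross_sensor_drift_tolerance_m": 0.15,                    # max centre offset tolerated across sensors
    "per_object_flags": ["dynamic"],                           # static/dynamic flag for prediction and mapping
    "track_reentry_rule": "keep_id_if_gap_under_2s",           # re-appearing object keeps its ID within the gap
}
```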
Questions we get
Your AV labelling stack of choice — 3D-capable platforms including Scale, Labelbox, Encord, CVAT 3D, or internal tools. We train operators to the tool, not vice versa.
Fusion-first. One operator carries identity across LiDAR, camera, and radar for a given object. Cross-sensor disagreements route to QA arbitration rather than getting silently resolved.
Both. Single-sweep cuboids for perception training, aggregated point clouds for HD-mapping and drivable-area workflows.
Secured, access-audited environment. Specific envelope — on-prem, VPC, or air-gapped — confirmed per engagement at scope.
Written per engagement. A cuboid IoU or per-class agreement threshold against a co-drafted gold reference. If we miss it, the rework is on us.
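As an illustration of how that kind of threshold gets checked, the sketch below scores candidate cuboids against a gold set with an axis-aligned 3D IoU. A production check would use oriented-box IoU and per-class thresholds; the 0.7 figure and the matching rule here are assumptions, not our standard numbers.

```python
def iou_3d_axis_aligned(a, b) -> float:
    """Axis-aligned 3D IoU between boxes given as (xmin, ymin, zmin, xmax, ymax, zmax).

    Simplification for illustration only: ignores yaw, so it mis-states overlap
    for rotated cuboids.
    """
    inter = 1.0
    for i in range(3):
        lo = max(a[i], b[i])
        hi = min(a[i + 3], b[i + 3])
        if hi <= lo:
            return 0.0
        inter *= hi - lo

    def vol(box):
        return (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])

    return inter / (vol(a) + vol(b) - inter)


def passes_gold(candidates, gold, threshold: float = 0.7) -> bool:
    """True if every gold box is matched by some candidate at or above the IoU threshold."""
    return all(
        any(iou_3d_axis_aligned(c, g) >= threshold for c in candidates) for g in gold
    )
```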
Other annotation tracks
LiDAR & 3D sits alongside two other tracks under the same QA discipline and the same operating model.
Scope with us
We scope operators, schema work, and a written cuboid-accuracy threshold together — and run a pilot against a co-drafted gold set before you commit to steady-state volume.