Track 03 / LiDAR & 3D

Perception-grade labels for AV and robotics.

Oriented 3D cuboids in point-cloud space, LiDAR + camera sensor fusion, 3D semantic segmentation, multi-frame tracking. Reviewers trained on autonomous-driving and robotics schemas.

ISO 27001 Certified · Cert No. 452AGI102121
[3D viewer: live LiDAR point-cloud scene with oriented cuboids and a coordinate-axis legend · classes: car / truck / van / bus / pedestrian]
Techniques
Five: cuboid, fusion, 3D-seg, tracking, lane/ego
Sensors
Multi: LiDAR · camera · radar fusion
Verticals
Two: AV & robotics
Frame rate
High: multi-frame tracking with keyframes

Where we’ve delivered

AV and robotics. Not a generalist pool.

3D data labelling is a different craft. The reviewer pool is trained on AV / robotics schemas and sensor stacks — not swapped in from 2D mid-batch.

Domain 01

Autonomous driving

Dashcam + LiDAR sweep datasets for L2–L4 perception. Multi-sensor fusion and multi-frame tracking.

Cuboid · fusion · tracking
Domain 02

ADAS & trucking

Highway and distribution-yard datasets. Prioritised for edge cases: construction, night, adverse weather.

Cuboid · 3D-seg · lane
Domain 03

Robotics & AGV

Warehouse, logistics, and last-mile robotics. Structured indoor environments with fixed taxonomy.

Cuboid · 3D-seg
Domain 04

HD mapping

Aggregated point-cloud labelling for lane, sign, and infrastructure extraction in HD-map workflows.

3D-seg · lane · polyline

Schema specifics

What lives in a 3D schema we adopt.

3D schemas fail in different places than 2D. These are the surfaces most often worth tightening before a LiDAR pilot runs clean; a minimal record sketch follows the six items below.

01 / Heading convention

Which way is forward.

Front-face axis convention — the single most common drift point on cuboid batches.

02 / Partial scans

Object extrapolation.

How far to extrapolate a box when only part of an object returned points.

03 / Ground class

Drivable vs. ground.

Separate classes for drivable surface and generic ground — or collapsed.

04 / Sensor sync

LiDAR ↔ camera timestamps.

Which sensor is the temporal source of truth, and tolerance for cross-sensor drift.

05 / Dynamic vs. static

Motion tagged.

Per-object dynamic/static flag for motion prediction and mapping workflows.

06 / Re-entry IDs

Same object, later frame.

Rules for when a re-appearing object keeps its ID vs. takes a new one.
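
To make a few of those surfaces concrete (heading convention, partial scans, dynamic vs. static, re-entry IDs), here is a minimal sketch in Python of the kind of per-object record a cuboid schema pins down. Field names and the yaw convention are illustrative assumptions, not any specific platform's export format.

from dataclasses import dataclass

@dataclass
class CuboidLabel:
    # Hypothetical per-object record; every field name here is illustrative.
    track_id: int                           # persists across frames; re-entry rules decide reuse vs. new ID
    category: str                           # e.g. "car", "truck", "pedestrian"
    center_xyz: tuple[float, float, float]  # box centre in the ego/LiDAR frame, metres
    size_lwh: tuple[float, float, float]    # length, width, height, metres
    heading_yaw: float                      # radians; assumed convention: yaw 0 points along +x, through the front face
    is_dynamic: bool = True                 # dynamic vs. static flag for motion prediction and mapping
    extrapolated: bool = False              # True when the box extends past the returned points on a partial scan
    num_points: int = 0                     # LiDAR returns inside the box; a common QA gate for partial scans

Pinning these fields, and the conventions behind them, in writing before a pilot is what keeps cuboid batches from drifting.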

Questions we get

LiDAR annotation, answered plainly.

Q 01

Which labelling tools do you work inside?

Your AV labelling stack of choice — 3D-capable platforms including Scale, Labelbox, Encord, CVAT 3D, or internal tools. We train operators to the tool, not vice versa.

Q 02

How do you handle multi-sensor datasets?

Fusion-first. One operator carries identity across LiDAR, camera, and radar for a given object. Cross-sensor disagreements route to QA arbitration rather than getting silently resolved.

Q 03

Do you label aggregated sweeps or single frames?

Both. Single-sweep cuboids for perception training, aggregated point clouds for HD-mapping and drivable-area workflows.

Q 04

What’s the environment for regulated AV data?

Secured, access-audited environment. Specific envelope — on-prem, VPC, or air-gapped — confirmed per engagement at scope.

Q 05

What accuracy commitment do you offer on 3D?

Written per engagement. A cuboid IoU or per-class agreement threshold against a co-drafted gold reference. If we miss it, the rework is on us.
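
For illustration only, this is roughly what such a check looks like mechanically: a 3D IoU for yaw-only cuboids, approximated as bird's-eye-view polygon intersection times vertical overlap. The helper names and the 0.7 threshold are assumptions for the sketch, not our production QA code.

import numpy as np
from shapely.geometry import Polygon

def bev_corners(cx, cy, length, width, yaw):
    # Footprint corners of a yaw-rotated box in the ground plane.
    dx, dy = length / 2.0, width / 2.0
    local = np.array([[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]])
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return local @ rot.T + np.array([cx, cy])

def cuboid_iou_3d(a, b):
    # a, b: dicts with centre x/y/z, size l/w/h, and yaw (yaw-only boxes assumed).
    pa = Polygon(bev_corners(a["x"], a["y"], a["l"], a["w"], a["yaw"]))
    pb = Polygon(bev_corners(b["x"], b["y"], b["l"], b["w"], b["yaw"]))
    bev_inter = pa.intersection(pb).area
    z_overlap = max(0.0, min(a["z"] + a["h"] / 2, b["z"] + b["h"] / 2)
                        - max(a["z"] - a["h"] / 2, b["z"] - b["h"] / 2))
    inter = bev_inter * z_overlap
    union = a["l"] * a["w"] * a["h"] + b["l"] * b["w"] * b["h"] - inter
    return inter / union if union > 0 else 0.0

THRESHOLD = 0.7  # illustrative value; the real number is written per engagement
label = {"x": 10.2, "y": 3.1, "z": 0.9, "l": 4.5, "w": 1.9, "h": 1.6, "yaw": 0.05}
gold  = {"x": 10.0, "y": 3.0, "z": 0.9, "l": 4.6, "w": 1.8, "h": 1.6, "yaw": 0.00}
print(cuboid_iou_3d(label, gold) >= THRESHOLD)  # below threshold means rework

In practice the agreement is computed per class over the co-drafted gold set, and the acceptance logic lives in the QA pipeline rather than a standalone script.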

Other annotation tracks

3D is one modality. Your stack is probably multi-modal.

LiDAR & 3D sits alongside two other tracks under the same QA discipline and the same operating model.

Scope with us

Send us a sweep. We’ll return a pilot plan.

We scope operators, schema work, and a written cuboid-accuracy threshold together — and run a pilot against a co-drafted gold set before you commit to steady-state volume.