Track 01 / Computer Vision

Labels the perception stack actually trusts.

Bounding boxes, polygons, keypoints, segmentation, video tracking — the highest-volume track. Operators specialised by domain: autonomy, retail analytics, medical imaging, agri-tech.

ISO 27001 Certified · Cert No. 452AGI102121
Sample: bounding box annotation on a street scene, with pedestrians and vehicles labelled with tight 2D boxes.
Techniques
Six · box, polygon, segmentation, keypoint, tracking, cuboid
Throughput
High · highest-volume track
Specialisations
Four · AV, retail, medical, agri
Tooling
Yours · CVAT, Labelbox, V7, Encord, and more

Where we’ve delivered

Domain-specialised operators. Not generalists.

CV operators are pooled by domain — the same reviewer who labels medical lesions isn't also labelling AV dashcam footage. Domain trains judgment.

Domain 01

Autonomy & ADAS

Dashcam, surround-view, and sensor-fused datasets for perception training. Night, rain, and edge cases prioritised.

Box · segmentation · tracking
Domain 02

Retail & shelf

Planogram compliance, product recognition, stock-out detection. High SKU density, long-tail classes.

Polygon · box · count
Domain 03

Medical imaging

Lesion, organ, and anatomical landmark annotation under reviewer oversight. PHI-safe environment.

Polygon · segmentation · keypoint
Domain 04

Agri & geospatial

Crop row, weed, pest, and yield-estimation labels from drone and satellite imagery. Multi-season.

Polygon · segmentation

Schema specifics

What lives in a CV schema we adopt.

Your schema is the source of truth. These are the schema surfaces that most often need tightening before a pilot runs clean.

01 / Class hierarchy

Parent / child taxonomy.

Multi-level class trees with inheritance. Resolved before calibration, versioned through the engagement.
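A multi-level taxonomy with attribute inheritance can be sketched roughly as below. The class names, attribute names, and dotted-path convention are illustrative, not a real client schema.

```python
# Illustrative parent/child taxonomy; children inherit parent attributes.
# All names here are hypothetical examples, not a real schema.
TAXONOMY = {
    "vehicle": {"attributes": ["occlusion", "truncation"]},
    "vehicle.car": {"parent": "vehicle"},
    "vehicle.truck": {"parent": "vehicle"},
    "pedestrian": {"attributes": ["occlusion", "pose"]},
}

def resolve_attributes(class_id: str) -> list[str]:
    """Walk up the class tree, collecting inherited attributes."""
    attrs: list[str] = []
    node = TAXONOMY.get(class_id)
    while node is not None:
        # Parent attributes are prepended so inheritance order is preserved.
        attrs = node.get("attributes", []) + attrs
        node = TAXONOMY.get(node.get("parent", ""))
    return attrs
```

Versioning this tree through the engagement means a labelled frame always records which taxonomy revision it was annotated against.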

02 / Occlusion

Visibility bands, not yes/no.

Tri-state or percent bands rather than binary flags — carries more useful signal downstream.
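A percent-band mapping might look like the sketch below. The band edges (0 / 50) are placeholders; the client's schema defines the real cut points.

```python
# Hedged sketch: percent bands instead of a binary occluded flag.
# Band boundaries are illustrative only.
def occlusion_band(percent_occluded: float) -> str:
    if not 0 <= percent_occluded <= 100:
        raise ValueError("occlusion must be between 0 and 100")
    if percent_occluded == 0:
        return "visible"
    if percent_occluded <= 50:
        return "partial"
    return "heavy"
```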

03 / Crowd handling

Per-instance vs. group label.

Rules for when a crowd becomes a single "crowd" region rather than N instances.
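One common form of that rule is a per-frame instance threshold, sketched below with a hypothetical cutoff of 10.

```python
# Illustrative rule: above a per-frame instance threshold, annotators draw
# one "crowd" region instead of N individual instances.
CROWD_THRESHOLD = 10  # hypothetical value; set per engagement

def label_mode(instance_count: int) -> str:
    return "crowd_region" if instance_count > CROWD_THRESHOLD else "per_instance"
```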

04 / Truncation

Frame-edge conventions.

How to label objects cut by the frame — extend-the-box, clip-to-frame, or skip.
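The clip-to-frame convention, for instance, can be sketched as clamping box coordinates to the image bounds and recording a truncated flag — a minimal illustration, assuming `(x1, y1, x2, y2)` pixel boxes.

```python
# Sketch of clip-to-frame: coordinates are clamped to image bounds,
# and a flag records that the object was cut by the frame edge.
def clip_to_frame(box, width, height):
    x1, y1, x2, y2 = box
    clipped = (max(0, x1), max(0, y1), min(width, x2), min(height, y2))
    truncated = clipped != box
    return clipped, truncated
```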

05 / Polygon vertices

Density per object.

Minimum vertex density tied to object type. Pedestrians ≠ trucks ≠ medical lesions.
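Tying minimum vertex counts to class might look like the sketch below; the counts themselves are hypothetical and live in the client's schema.

```python
# Hypothetical per-class vertex minimums; real densities are set in the schema.
MIN_VERTICES = {"pedestrian": 12, "truck": 8, "lesion": 20}

def polygon_dense_enough(class_id: str, vertex_count: int) -> bool:
    # Unknown classes fall back to an illustrative default of 6.
    return vertex_count >= MIN_VERTICES.get(class_id, 6)
```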

06 / Tracking IDs

Re-entry rules.

When a disappearing-then-reappearing object keeps its ID vs. takes a new one.
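A common way to encode that rule is a maximum frame gap, sketched below with an illustrative threshold of 30 frames.

```python
# Sketch of a re-entry rule: an object reappearing within MAX_GAP frames
# keeps its track ID; a longer absence starts a new track.
MAX_GAP = 30  # illustrative threshold, written per engagement

def assign_track_id(last_seen_frame: int, current_frame: int,
                    old_id: int, next_id: int) -> int:
    if current_frame - last_seen_frame <= MAX_GAP:
        return old_id
    return next_id
```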

Questions we get

CV annotation, answered plainly.

Q 01

Do you work inside our labeling tool, or yours?

Yours. We train operators to the platform you already run — CVAT, Label Studio, V7, Labelbox, Encord, Scale, SuperAnnotate, Roboflow, or an internal tool. No migration required.

Q 02

What’s the pilot-to-production timeline?

Typically two to four weeks from scope to steady-state, depending on schema complexity. Pilot runs against a co-drafted gold set; schema tightens before scaling headcount.

Q 03

How do you handle edge cases not covered by the schema?

Route-to-client, not guess-and-log. Ambiguous frames surface to your data lead with a proposed resolution and are not silently labelled wrong.

Q 04

Can you operate in a secured / air-gapped environment?

Yes, for medical, defense, and regulated financial datasets. Operators work inside the secured environment with audited access. Specific envelope confirmed at scope.

Q 05

What’s your accuracy commitment?

Written per engagement. An inter-annotator agreement (IAA) threshold against a gold reference co-drafted with your team — miss it and rework is on us.

Other annotation tracks

Running a multi-modal dataset?

Computer vision sits alongside two other tracks under the same QA discipline and the same operating model.

Scope with us

Send us a CV batch. We’ll return a pilot plan.

We scope operators, schema work, and a written accuracy threshold together — and run a pilot against a co-drafted gold set before you commit to steady-state volume.