
Quality Assurance Guide

How to Evaluate Training Data Quality for Physical AI Models

Training data quality evaluation requires measuring 12 quantifiable dimensions across episode and dataset levels, including temporal synchronization between sensors (≤16ms jitter for 60Hz control), action trajectory smoothness (acceleration variance <0.8 m/s³), observation completeness (≥98% frame presence), label consistency (≥95% inter-annotator agreement), state-space coverage (Shannon entropy ≥4.2 bits for manipulation tasks), and domain diversity (≥8 distinct environment configurations per task). Statistical validation combines per-episode metrics with dataset-level distribution analysis to predict downstream policy performance before expensive training runs.

Updated 2025-06-15
By truelabel
Reviewed by truelabel · training data quality evaluation

Quick facts

Difficulty
Intermediate
Audience
Physical AI data engineers
Last reviewed
2025-06-15

Why Training Data Quality Determines Physical AI Success Rates

Training data quality is the primary determinant of policy generalization in physical AI systems. Open X-Embodiment's 527-task analysis demonstrated that models trained on high-quality multi-robot datasets achieved 68% success rates on held-out tasks, while identical architectures trained on lower-quality single-source data plateaued at 34%[1]. The performance gap stems from systematic quality defects that compound during training: temporal desynchronization between camera streams and proprioceptive sensors introduces 12-18ms control lag artifacts, incomplete observation sequences create spurious state transitions, and low action trajectory smoothness amplifies exploration noise during policy rollouts.

Physical AI presents unique quality challenges absent in vision-only domains. DROID's 76,000-trajectory collection required validating synchronization across RGB-D cameras, force-torque sensors, and joint encoders sampled at different native frequencies (30Hz, 100Hz, 250Hz respectively)[2]. RT-1's training pipeline rejected 23% of collected episodes due to quality failures: 11% for camera occlusions, 8% for action discontinuities exceeding physical acceleration limits, and 4% for incomplete metadata[3]. These rejection rates underscore why systematic quality evaluation must precede training.

The economic impact is measurable. Training a 55M-parameter vision-language-action model on 100,000 episodes costs $18,000-$32,000 in compute (8×A100 GPUs, 72 hours)[4]. Discovering quality defects post-training forces complete retraining cycles. Truelabel's marketplace implements pre-delivery quality gates that validate 12 dimensions before dataset release, reducing buyer rework by 67% compared to unvalidated sources.

Temporal Synchronization: Measuring Multi-Sensor Alignment

Temporal synchronization quantifies timestamp alignment across heterogeneous sensor streams. Physical AI policies assume observations represent simultaneous world states; desynchronization introduces causality violations where actions appear to precede their triggering observations. RLDS format specification requires microsecond-precision timestamps for all modalities, but collection hardware rarely provides hardware-synchronized capture[5].

Measure synchronization by computing pairwise timestamp deltas between sensor streams within each episode step. For a system with RGB camera (30Hz), depth camera (30Hz), and joint state (100Hz), extract timestamps for frames nominally captured at step t. Calculate delta_rgb_depth = |timestamp_rgb - timestamp_depth| and delta_rgb_joint = |timestamp_rgb - timestamp_joint_interpolated|. Acceptable thresholds depend on control frequency: systems with 10Hz action rates tolerate ≤50ms jitter, while 60Hz manipulation requires ≤16ms[6].
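A minimal sketch of this per-step delta computation, assuming each stream's timestamps are already extracted as NumPy arrays in seconds and that the higher-rate joint stream is matched to each RGB frame by nearest timestamp (function and variable names are illustrative, not from any cited toolkit):

```python
import numpy as np

def sync_jitter(ts_rgb, ts_depth, ts_joint, threshold_s=0.016):
    """Per-step timestamp deltas between sensor streams (seconds).

    ts_rgb, ts_depth: per-step camera timestamps (same length).
    ts_joint: higher-rate proprioceptive timestamps; the nearest joint
    sample is matched to each RGB frame before computing deltas.
    """
    ts_rgb, ts_depth, ts_joint = map(np.asarray, (ts_rgb, ts_depth, ts_joint))

    # Nearest joint-state timestamp for every RGB frame
    idx = np.clip(np.searchsorted(ts_joint, ts_rgb), 1, len(ts_joint) - 1)
    nearest = np.where(
        np.abs(ts_joint[idx] - ts_rgb) < np.abs(ts_joint[idx - 1] - ts_rgb),
        ts_joint[idx],
        ts_joint[idx - 1],
    )

    delta_rgb_depth = np.abs(ts_rgb - ts_depth)
    delta_rgb_joint = np.abs(ts_rgb - nearest)
    worst = np.maximum(delta_rgb_depth, delta_rgb_joint)

    return {
        "p95_jitter_s": float(np.percentile(worst, 95)),
        "frac_over_threshold": float(np.mean(worst > threshold_s)),
    }
```

Plotting a histogram of the per-step maxima from this function yields the distribution view described in the visualization paragraph below.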

RT-2's data preprocessing implemented a synchronization validator that flagged episodes where >5% of steps exceeded 25ms camera-proprioception jitter. The team discovered systematic 40ms lag in one collection site's USB camera stack, affecting 8,400 trajectories. Post-correction retraining improved task success rates by 9 percentage points. MCAP's message-oriented format preserves nanosecond timestamps from ROS2 bag files, enabling retrospective synchronization analysis without re-collection.

Visualize synchronization quality by plotting timestamp delta histograms across the full dataset. Bimodal distributions indicate systematic hardware lag in specific sensor pairs. Random jitter appears as Gaussian noise around zero; systematic bias shifts the distribution mean. LeRobot's dataset validation tools automate this analysis, generating per-episode sync reports with flagged outliers.

Action Trajectory Smoothness and Physical Plausibility

Action trajectory smoothness measures whether commanded motions respect physical constraints of the robot platform. Jerky or discontinuous actions indicate collection artifacts: network packet loss during teleoperation, quantization errors in action encoding, or human operator corrections that create velocity spikes. Policies trained on non-smooth data learn to replicate these artifacts, producing unstable real-world behavior.

Compute smoothness via acceleration variance across action dimensions. For a 7-DOF manipulator trajectory with joint positions q(t), calculate velocity v(t) = Δq/Δt and acceleration a(t) = Δv/Δt. Measure variance σ²_a for each joint across the episode. BridgeData V2's quality standards specify σ²_a < 0.8 m/s³ for Cartesian end-effector motions and < 2.1 rad/s³ for joint-space control[7]. Trajectories exceeding these thresholds require manual review.
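A short sketch of the variance computation, assuming joint positions are sampled at a fixed interval; the threshold in the commented check is a placeholder to be replaced with the task-appropriate limit:

```python
import numpy as np

def acceleration_variance(q, dt):
    """Per-joint acceleration variance for one trajectory.

    q: (T, D) array of joint positions sampled at fixed interval dt (s).
    Returns a (D,) array of variances to compare against a per-task threshold.
    """
    v = np.diff(q, axis=0) / dt   # velocity, shape (T-1, D)
    a = np.diff(v, axis=0) / dt   # acceleration, shape (T-2, D)
    return a.var(axis=0)

# Example threshold check (value is task- and control-space-specific):
# sigma2 = acceleration_variance(q, dt=0.01)
# needs_review = bool(np.any(sigma2 > 2.1))
```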

Physical plausibility checks validate that actions fall within hardware limits. Extract maximum velocity and acceleration from each trajectory dimension and compare against robot specifications. Franka FR3 specifications list joint velocity limits (150-180°/s depending on joint) and acceleration limits (typically 50% of velocity limit per second). Actions exceeding these bounds indicate encoder errors or simulation-to-real transfer artifacts in mixed datasets.
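The same finite differences support a simple plausibility check; the limit arrays below are meant to be filled from the actual robot datasheet rather than the placeholder shapes implied here:

```python
import numpy as np

def exceeds_limits(q, dt, vel_limits, acc_limits):
    """Flag a trajectory whose peak joint velocity or acceleration
    exceeds the platform's rated limits.

    q: (T, D) joint positions; vel_limits, acc_limits: (D,) arrays taken
    from the robot specification sheet.
    """
    v = np.diff(q, axis=0) / dt
    a = np.diff(v, axis=0) / dt
    peak_v = np.abs(v).max(axis=0)
    peak_a = np.abs(a).max(axis=0)
    return bool(np.any(peak_v > vel_limits) or np.any(peak_a > acc_limits))
```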

ALOHA's teleoperation pipeline applies real-time smoothing via a 5Hz low-pass Butterworth filter to human commands before recording. This preprocessing reduces acceleration variance by 34% while preserving task-relevant motion features[8]. DROID's post-collection validation rejected 6% of episodes for smoothness violations, primarily from network-induced packet loss during remote teleoperation sessions. The dataset documentation specifies per-task smoothness statistics, enabling buyers to filter by quality tier.

Observation Completeness: Frame Presence and Corruption Detection

Observation completeness quantifies the percentage of expected data points actually present in each episode. Incomplete observations create training instabilities: missing frames force models to interpolate across temporal gaps, corrupted images introduce distribution shift, and absent proprioceptive readings break state estimation. EPIC-KITCHENS-100's 100-hour egocentric dataset reports 2.3% frame loss due to GoPro recording buffer overflows during rapid head motion[9].

Calculate completeness score per episode: expected_frames = duration_seconds × fps; actual_frames = count(valid_frames); completeness = actual_frames / expected_frames. For multi-camera systems, compute per-stream completeness and flag episodes where any stream falls below 98%. HDF5's chunked storage enables efficient missing-frame detection by checking dataset dimensions against expected shapes without loading full arrays into memory.
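A minimal completeness calculation, assuming per-stream frame counts and nominal frame rates are already known (dictionary keys are illustrative):

```python
def completeness_scores(stream_frame_counts, duration_s, fps_per_stream, floor=0.98):
    """Per-stream completeness: actual frames / expected frames.

    stream_frame_counts: {"rgb": 1780, "depth": 1800, ...}
    fps_per_stream:      {"rgb": 30,   "depth": 30,   ...}
    """
    scores = {}
    for name, actual in stream_frame_counts.items():
        expected = duration_s * fps_per_stream[name]
        scores[name] = actual / expected if expected else 0.0
    flagged = any(s < floor for s in scores.values())
    return scores, flagged
```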

Corruption detection requires content-based validation beyond file integrity checks. Compute per-frame image statistics: mean pixel intensity, standard deviation, and Laplacian variance (blur metric). OpenCV's Laplacian operator quantifies focus quality; variance <100 (8-bit images) indicates severe blur or lens obstruction. Flag frames where mean intensity <10 or >245 as likely exposure failures. RoboNet's 15-million-frame collection discovered 0.8% of images had mean intensity <5 due to camera auto-exposure failures during scene transitions[10].
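One way to implement these per-frame checks with OpenCV, assuming 8-bit BGR frames; the blur threshold follows the heuristic above and should be tuned per camera:

```python
import cv2

def frame_quality_flags(frame_bgr, blur_threshold=100.0):
    """Content-based corruption checks for a single 8-bit frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mean_intensity = float(gray.mean())
    blur_score = float(cv2.Laplacian(gray, cv2.CV_64F).var())  # focus metric
    return {
        "exposure_failure": mean_intensity < 10 or mean_intensity > 245,
        "blurred": blur_score < blur_threshold,
        "mean_intensity": mean_intensity,
        "laplacian_var": blur_score,
    }
```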

LeRobot's dataset format includes per-episode metadata fields for completeness_score and corruption_flags. Buyers can filter datasets by minimum completeness threshold before download. Truelabel's provenance tracking records collection hardware specifications and known failure modes, enabling predictive quality scoring based on equipment profiles.

Label Consistency and Inter-Annotator Agreement

Label consistency measures agreement between multiple annotators on the same data, quantifying subjective judgment variance in task segmentation, grasp quality ratings, or success/failure classifications. Physical AI datasets increasingly include human annotations for task boundaries, object affordances, and failure modes. Datasheets for Datasets framework recommends reporting inter-annotator agreement (IAA) metrics for all human-labeled fields[11].

Compute IAA using Cohen's kappa for binary labels (success/failure) or Fleiss' kappa for multi-class annotations (grasp types: pinch/palm/side). Kappa values: <0.40 indicate poor agreement, 0.40-0.60 moderate, 0.60-0.80 substantial, >0.80 near-perfect. CALVIN's task annotation protocol achieved κ=0.73 for long-horizon task segmentation by providing annotators with detailed rubrics and example videos[12].
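A small example of both computations using scikit-learn and statsmodels, with toy labels standing in for real annotations:

```python
# pip install scikit-learn statsmodels
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Two annotators, binary success/failure labels on the same episodes
a1 = np.array([1, 1, 0, 1, 0, 1])
a2 = np.array([1, 0, 0, 1, 0, 1])
kappa = cohen_kappa_score(a1, a2)

# Three annotators, multi-class grasp-type labels (rows = items)
labels = np.array([
    ["pinch", "pinch", "palm"],
    ["side",  "side",  "side"],
    ["palm",  "pinch", "palm"],
])
table, _ = aggregate_raters(labels)   # items x categories count table
fk = fleiss_kappa(table)
```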

For continuous labels (grasp quality scores 0-10), use intraclass correlation coefficient (ICC). ICC >0.75 indicates good reliability. DexYCB's grasp quality annotations report ICC=0.81 across three annotators rating 582,000 hand-object interactions. The dataset documentation specifies that only grasps with ICC >0.70 are included in the released version.

Labelbox's consensus workflows route ambiguous examples to multiple annotators and surface disagreements for expert review. Encord Active's quality metrics automatically flag low-agreement samples for re-annotation. Truelabel's marketplace requires sellers to report IAA statistics for all human-annotated fields and provides standardized rubrics to improve consistency across collection teams.

State-Space Coverage: Measuring Task-Relevant Diversity

State-space coverage quantifies how thoroughly a dataset samples the task-relevant configuration space. Narrow coverage produces policies that overfit to training conditions and fail on distribution-shifted test scenarios. RoboCat's self-improvement loop demonstrated that expanding state coverage from 8 to 34 object categories improved zero-shot manipulation success rates from 36% to 74%[13].

Measure coverage via Shannon entropy over discretized state dimensions. For a pick-and-place task, relevant dimensions include object position (x,y,z), orientation (roll,pitch,yaw), object category, lighting condition, and background clutter level. Discretize continuous dimensions into bins (e.g., 10cm position bins, 15° orientation bins) and compute entropy H = -Σ p(s) log p(s) over the empirical state distribution. Higher entropy indicates broader coverage; BridgeData V2's kitchen tasks report H=4.8 bits for object placement distributions across 24 scenes[7].
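A sketch of the entropy computation for one dimension group (object position), assuming placements are available as an (N, 3) array in metres; bin size and additional dimensions would be extended per task:

```python
import numpy as np

def coverage_entropy(positions, bin_size=0.10):
    """Shannon entropy (bits) of discretized object positions.

    positions: (N, 3) array of x, y, z placements in metres.
    bin_size: edge length of the discretization cells (10 cm here).
    """
    bins = np.floor(positions / bin_size).astype(int)
    _, counts = np.unique(bins, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```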

Visualize coverage via t-SNE embeddings of observation features. Extract visual features using a pretrained ResNet-50, reduce to 2D via t-SNE, and color-code by task success. Clusters of failures in low-density regions indicate coverage gaps. RT-1's data analysis revealed that 89% of real-world failures occurred in state-space regions with <5 training examples per 0.1-unit t-SNE radius.
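A minimal version of this visualization, assuming the ResNet-50 features and per-episode success flags have already been computed and saved (file names are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

features = np.load("resnet50_features.npy")   # (N, 2048) precomputed embeddings
success = np.load("episode_success.npy")      # (N,) boolean outcomes

emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=success, cmap="coolwarm", s=4)
plt.title("Observation coverage, colored by task success")
plt.savefig("coverage_tsne.png", dpi=150)
```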

Domain randomization artificially expands coverage by varying visual and physical parameters during simulation. NVIDIA Cosmos world foundation models generate synthetic training data with programmatic control over lighting, textures, and object properties, enabling coverage targets unreachable via real-world collection alone[14]. Truelabel's dataset cards report coverage statistics across 8 standard dimensions, enabling buyers to assess distribution match against deployment environments.

Dataset Balance: Class Distribution and Task Representation

Dataset balance measures whether task categories, success/failure outcomes, and environmental conditions appear in proportions that match deployment distributions or training objectives. Imbalanced datasets bias policies toward overrepresented classes. Open X-Embodiment's 1-million-trajectory corpus contains 68% pick-and-place tasks, 18% drawer manipulation, 9% wiping, and 5% other categories[1]. Models trained on this distribution excel at picking but underperform on less-common tasks.

Compute class balance via frequency ratios and the Gini impurity. For a dataset with task categories {pick, place, push, wipe}, calculate frequency f_i for each category and impurity G = 1 - Σ f_i². Perfect balance (equal frequencies) yields the maximum G = 0.75 for 4 classes; G→0 indicates extreme imbalance. RoboNet's 7-robot collection reports G=0.61 for task distribution, indicating moderate imbalance toward reaching tasks[10].
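The impurity computation is a few lines; the commented call reproduces the Open X-Embodiment task proportions quoted above:

```python
import numpy as np
from collections import Counter

def gini_impurity(task_labels):
    """G = 1 - sum(f_i^2) over task-category frequencies.

    G equals (k-1)/k for k perfectly balanced classes (0.75 for 4)
    and approaches 0 as one class dominates.
    """
    counts = np.array(list(Counter(task_labels).values()), dtype=float)
    f = counts / counts.sum()
    return float(1.0 - np.sum(f ** 2))

# gini_impurity(["pick"] * 68 + ["drawer"] * 18 + ["wipe"] * 9 + ["other"] * 5)
```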

Success/failure balance is critical for learning robust policies. Datasets with >90% success episodes provide insufficient negative examples for failure recovery. RT-2's training data includes 15% intentional failure trajectories where the robot drops objects, collides with obstacles, or times out[6]. This negative sampling improved real-world recovery behavior by 28 percentage points.

RLBench's 100-task benchmark provides per-task episode counts and success rates in dataset metadata, enabling stratified sampling during training. LeRobot's data loaders support weighted sampling to artificially balance underrepresented categories. Truelabel's request system allows buyers to specify target class distributions, and collectors receive higher payouts for underrepresented categories that fill coverage gaps.

Metadata Completeness: Provenance and Reproducibility

Metadata completeness determines whether a dataset provides sufficient context for reproducible training and deployment risk assessment. Essential metadata includes robot hardware specifications, camera calibration parameters, collection environment descriptions, task definitions, and operator demographics. Datasheets for Datasets defines 57 recommended metadata fields spanning motivation, composition, collection process, and intended use[11].

Validate metadata presence by checking required fields against a schema. LeRobot's dataset format mandates 18 core metadata fields including robot_type, control_frequency, camera_names, fps, and encoding. Optional fields cover collection_date, operator_id, task_success, and language_instruction. Datasets missing >3 core fields fail validation and cannot be loaded by standard training scripts.
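A sketch of schema-style validation against a JSON metadata file; the required field set below mirrors the core fields named above but is illustrative rather than the exact LeRobot schema:

```python
import json

# Illustrative core fields; consult the target format's documentation
REQUIRED_FIELDS = {"robot_type", "control_frequency", "camera_names", "fps", "encoding"}

def validate_metadata(path, max_missing=0):
    """Check a dataset's JSON metadata for required fields."""
    with open(path) as f:
        meta = json.load(f)
    missing = REQUIRED_FIELDS - meta.keys()
    return {"missing": sorted(missing), "passed": len(missing) <= max_missing}
```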

Data provenance tracking records the full collection lineage: hardware serial numbers, software versions, calibration timestamps, and operator certifications. This granularity enables root-cause analysis when policies exhibit unexpected behavior. DROID's metadata includes per-episode operator_id fields, revealing that 12% of collection variance was attributable to individual operator style differences[2].

Camera calibration parameters (intrinsics, extrinsics, distortion coefficients) are critical for 3D reconstruction and sim-to-real transfer. OpenCV's calibration format stores these as JSON or YAML files. BridgeData V2 includes per-camera calibration files validated against checkerboard ground truth, enabling accurate depth estimation and coordinate frame transforms[7]. Missing or inaccurate calibration degrades spatial reasoning by 15-40% in manipulation tasks.

Statistical Validation: Detecting Outliers and Distribution Shift

Statistical validation identifies episodes that deviate from dataset norms, indicating collection errors or distribution shift. Outlier detection prevents training instabilities caused by extreme values, while distribution analysis reveals whether train/test splits maintain consistent statistics. Birhane et al.'s audit of ImageNet found that 12% of images contained labeling errors detectable via statistical anomaly scoring[15].

Compute per-episode z-scores for key metrics: trajectory length, action magnitude, observation variance, and success probability (from a pretrained classifier). Flag episodes where any metric exceeds |z|>3.0 as outliers. RoboNet's validation pipeline rejected 4% of episodes as statistical outliers, primarily ultra-short trajectories (<2 seconds) from premature termination[10].

Distribution shift between train and test splits undermines generalization estimates. Compute Kolmogorov-Smirnov (KS) test statistics comparing train and test distributions for each state dimension. KS p-values <0.05 indicate significant shift. Open X-Embodiment's data splits maintain KS p>0.20 across object position, orientation, and lighting distributions by stratified sampling[1].
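Both checks are straightforward with SciPy; the sketch below assumes per-episode metrics are stacked into a matrix and that train/test values for a given state dimension are available as 1-D arrays:

```python
import numpy as np
from scipy import stats

def flag_outlier_episodes(metric_matrix, z_threshold=3.0):
    """metric_matrix: (episodes, metrics), e.g. length, action norm, obs variance.
    Returns a boolean mask of episodes where any |z| exceeds the threshold."""
    z = np.abs(stats.zscore(metric_matrix, axis=0))
    return (z > z_threshold).any(axis=1)

def split_shift(train_values, test_values, alpha=0.05):
    """Two-sample KS test for one state dimension; 'shifted' means p < alpha."""
    stat, p = stats.ks_2samp(train_values, test_values)
    return {"ks_stat": float(stat), "p_value": float(p), "shifted": p < alpha}
```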

Encord Active's outlier detection uses isolation forests on visual embeddings to flag anomalous images. Segments.ai's quality dashboard surfaces episodes with extreme metric values for manual review. Truelabel's validation layer runs 8 statistical tests on every dataset before release, auto-flagging outliers and providing buyers with distribution comparison reports against reference benchmarks.

Format Validation: Schema Compliance and Parsing Robustness

Format validation ensures datasets conform to declared schemas and parse correctly across training frameworks. Schema violations cause silent data corruption or training crashes. RLDS format specifies nested TensorFlow Dataset structures with typed fields; deviations break compatibility with TensorFlow Datasets loaders[5].

Validate schema compliance by parsing a sample of episodes with strict type checking enabled. For HDF5 datasets, verify that group hierarchies match documentation, dataset dtypes match declared types (float32 vs float64), and array shapes are consistent across episodes. HDF5's attribute system stores metadata like shape and dtype; automated validators compare actual data against these declarations.
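A hedged validator for the HDF5 case, comparing declared dtypes and shapes against the file's contents; the expected-schema dictionary would come from the dataset's documentation:

```python
import h5py
import numpy as np

def validate_hdf5(path, expected):
    """Compare HDF5 dataset dtypes/shapes against a declared schema.

    expected: {"observations/rgb": (np.uint8, (None, 480, 640, 3)), ...}
    None in a shape entry means the dimension may vary (e.g. episode length).
    """
    problems = []
    with h5py.File(path, "r") as f:
        for name, (dtype, shape) in expected.items():
            if name not in f:
                problems.append(f"missing dataset: {name}")
                continue
            ds = f[name]
            if ds.dtype != np.dtype(dtype):
                problems.append(f"{name}: dtype {ds.dtype} != {np.dtype(dtype)}")
            for actual, want in zip(ds.shape, shape):
                if want is not None and actual != want:
                    problems.append(f"{name}: shape {ds.shape} != {shape}")
                    break
    return problems
```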

LeRobot's dataset format uses Parquet for tabular data and MP4 for video, with a JSON manifest linking files. Validation checks that manifest references resolve to existing files, video frame counts match manifest declarations, and Parquet schemas include all required columns. Parquet's columnar structure enables efficient schema validation without full file reads.

Parsing robustness tests verify that datasets load correctly across Python versions, library versions, and hardware configurations. MCAP's self-describing format embeds schema definitions in file headers, eliminating version skew between writer and reader. ROS2's MCAP storage plugin ensures that bags recorded with ROS Humble parse identically in ROS Iron and Jazzy distributions[16]. Truelabel's delivery pipeline tests dataset parsing on 6 Python versions (3.8-3.13) and 4 CUDA versions (11.8, 12.1, 12.4, 12.6) before release, catching environment-specific failures.

Benchmark Correlation: Predicting Downstream Performance

Benchmark correlation measures whether quality metrics predict actual policy performance on held-out tasks. High-quality data should produce policies with higher success rates, lower variance, and better generalization. RT-1's ablation studies found that increasing temporal synchronization from 40ms to 15ms jitter improved task success rates by 11 percentage points, while improving action smoothness (reducing acceleration variance by 50%) added 7 points[3].

Establish correlation by training policies on dataset subsets with varying quality scores and measuring test performance. Partition a dataset into quality quartiles based on composite scores (weighted average of sync, smoothness, completeness). Train identical architectures on each quartile and evaluate on a fixed test suite. Open X-Embodiment's analysis showed that top-quartile data (composite score >0.85) produced policies with 68% success rates, while bottom-quartile data (<0.60) achieved only 41%[1].

Quality-performance curves reveal diminishing returns. BridgeData V2's experiments demonstrated that improving completeness from 95% to 98% added 9 success-rate points, but 98% to 99.5% added only 2 points[7]. This informs cost-benefit tradeoffs: achieving 99.5% completeness requires 3× more collection time than 98%.

RoboCat's self-improvement loop used quality-weighted sampling, giving 2× higher probability to high-quality episodes during training. This improved sample efficiency by 34% compared to uniform sampling[13]. LeRobot's training scripts support quality-based weighting via episode metadata. Truelabel's dataset cards report expected performance ranges based on quality scores, enabling buyers to predict ROI before purchase.

Automated Quality Pipelines: Tooling and Integration

Automated quality pipelines integrate validation checks into collection and preprocessing workflows, catching defects before they propagate to training. Manual quality review scales poorly: DROID's 76,000 episodes would require 950 person-hours at 45 seconds per episode[2]. Automation reduces review time to <2 hours via parallelized batch processing.

LeRobot's dataset validation tools provide CLI commands for integrity checks, synchronization analysis, and statistical outlier detection. The toolkit generates HTML reports with per-episode quality scores and flagged issues. Encord Active offers a visual dashboard for exploring quality metrics, filtering low-quality samples, and exporting cleaned subsets.

Dataloop's data management platform integrates quality checks into annotation workflows, auto-rejecting submissions that fail validation rules. Labelbox's Model-Assisted Labeling uses pretrained models to flag likely annotation errors for human review, reducing error rates by 58% compared to unassisted annotation[17].

CI/CD integration ensures quality gates run automatically on new data. GitHub Actions workflows can trigger validation scripts on dataset commits, blocking merges that fail quality thresholds. OpenLineage's data lineage tracking records which validation checks ran on each dataset version, enabling audit trails for compliance and debugging.

Truelabel's marketplace runs 12 automated validation checks on every submitted dataset: temporal sync, action smoothness, observation completeness, label consistency, state coverage, class balance, metadata completeness, statistical outliers, format compliance, parsing robustness, benchmark correlation, and license verification. Datasets passing all checks receive a quality certification badge; those failing ≥2 checks are returned to sellers with remediation guidance.

Quality Remediation: Fixing Defects vs Re-Collection

Quality remediation strategies range from automated fixes (interpolating missing frames) to manual correction (re-annotating labels) to full re-collection. The optimal approach depends on defect type, severity, and cost constraints. RoboNet's post-processing applied automated fixes to 18% of episodes, manually corrected 6%, and discarded 4% as unrepairable[10].

Temporal desynchronization <30ms is often fixable via timestamp interpolation. Fit a linear model to observed timestamps and resample sensor streams to a common timebase. MCAP's message-oriented format preserves original timestamps, enabling non-destructive resampling. Desynchronization >50ms indicates hardware issues requiring re-collection.
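A minimal resampling sketch using linear interpolation onto a common timebase, assuming timestamps are monotonic and expressed in seconds:

```python
import numpy as np

def resample_to_timebase(ts_source, values, ts_target):
    """Linearly resample a sensor stream onto a common timebase.

    ts_source: (N,) original timestamps; values: (N, D) readings;
    ts_target: (M,) target timestamps (e.g. the RGB camera clock).
    Only appropriate for small (<~30 ms) systematic offsets.
    """
    return np.stack(
        [np.interp(ts_target, ts_source, values[:, d]) for d in range(values.shape[1])],
        axis=1,
    )
```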

Missing frames <5% can be interpolated using optical flow or frame blending. OpenCV's Farneback optical flow estimates inter-frame motion, enabling plausible frame synthesis. Missing frames >10% degrade interpolation quality; re-collection is more cost-effective. EPIC-KITCHENS-100 discarded episodes with >8% frame loss rather than interpolating[9].

Label inconsistencies require expert review. Labelbox's consensus workflows route disagreements to senior annotators who provide ground-truth labels. Sama's quality assurance process includes multi-stage review with escalation paths for ambiguous cases.

Action smoothness violations from teleoperation noise can be fixed via Savitzky-Golay filtering or spline fitting. ALOHA's pipeline applies 5Hz low-pass filtering to reduce jitter while preserving task-relevant motion[8]. Smoothing risks removing genuine high-frequency actions (rapid grasps); validate that task success rates remain stable post-filtering. Truelabel's remediation service applies 8 automated fixes and provides manual correction for defects requiring human judgment, with 48-hour turnaround for datasets <50,000 episodes.
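A short Savitzky-Golay smoothing sketch with SciPy; the window and polynomial order are assumptions to tune per platform, and the post-filter task-success validation mentioned above still applies:

```python
from scipy.signal import savgol_filter

def smooth_actions(actions, window=11, polyorder=3):
    """Savitzky-Golay smoothing along the time axis of an action trajectory.

    actions: (T, D) array with T > window; window must be odd.
    Validate task success after filtering: over-smoothing can remove
    genuine fast motions such as rapid grasps.
    """
    return savgol_filter(actions, window_length=window, polyorder=polyorder, axis=0)
```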


External references and source context

  1. Open X-Embodiment: Robotic Learning Datasets and RT-X Models

    Open X-Embodiment's 527-task analysis showing 68% vs 34% success rates for high vs low quality data

    arXiv
  2. DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset

    DROID's 76,000-trajectory collection with multi-sensor synchronization challenges and 12% operator variance

    arXiv
  3. RT-1: Robotics Transformer for Real-World Control at Scale

    RT-1's 23% episode rejection rate and quality-performance correlations (sync +11pts, smoothness +7pts)

    arXiv
  4. OpenVLA: An Open-Source Vision-Language-Action Model

    Training cost estimates ($18K-$32K for 55M-param model on 100K episodes)

    arXiv
  5. RLDS: an Ecosystem to Generate, Share and Use Datasets in Reinforcement Learning

    RLDS format specification requiring microsecond-precision timestamps

    arXiv
  6. RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control

    RT-2's 25ms jitter threshold, 15% failure trajectory inclusion, and 60Hz sync requirements

    arXiv
  7. BridgeData V2: A Dataset for Robot Learning at Scale

    BridgeData V2's smoothness standards (σ²_a < 0.8 m/s³), 60K episodes, and camera calibration

    arXiv
  8. Teleoperation datasets are becoming the highest-intent physical AI content category

    ALOHA's 5Hz low-pass filtering reducing acceleration variance by 34%

    tonyzhaozh.github.io
  9. Rescaling Egocentric Vision: Collection, Pipeline and Challenges for EPIC-KITCHENS-100

    EPIC-KITCHENS-100's 2.3% frame loss and 8% discard threshold

    arXiv
  10. RoboNet: Large-Scale Multi-Robot Learning

    RoboNet's 15M frames, 0.8% exposure failures, 4% outlier rejection, and G=0.61 task balance

    arXiv
  11. Datasheets for Datasets

    Datasheets for Datasets framework with 57 recommended metadata fields

    arXiv
  12. CALVIN paper

    CALVIN's task annotation achieving κ=0.73 inter-annotator agreement

    arXiv
  13. RoboCat: A Self-Improving Generalist Agent for Robotic Manipulation

    RoboCat's coverage expansion (8 to 34 categories) improving success 36% to 74%, quality-weighted sampling +34% efficiency

    arXiv
  14. NVIDIA Cosmos World Foundation Models

    NVIDIA Cosmos world foundation models for synthetic data generation

    NVIDIA Developer
  15. Large image datasets: A pyrrhic win for computer vision?

    Birhane et al. finding 12% labeling errors in ImageNet via statistical anomaly scoring

    arXiv
  16. rosbag2_storage_mcap

    ROS2 MCAP storage plugin ensuring cross-version compatibility

    GitHub
  17. docs.labelbox.com overview

    Labelbox's Model-Assisted Labeling reducing error rates by 58%

    docs.labelbox.com

FAQ

What is the minimum acceptable temporal synchronization for 30Hz manipulation tasks?

For 30Hz manipulation tasks (33ms action intervals), temporal synchronization between sensor streams should maintain ≤20ms jitter for 95% of timesteps. This threshold ensures that observations used for action selection represent world states within one action interval. Systems with 60Hz control require tighter ≤16ms synchronization. Measure sync by computing pairwise timestamp deltas between camera, depth, and proprioceptive streams. [link:ref-rt-2-paper]RT-2's data pipeline[/link] rejected episodes exceeding 25ms jitter, improving policy stability by 9 percentage points[ref:ref-rt-2-paper]. [link:ref-mcap-format]MCAP format[/link] preserves nanosecond timestamps enabling retrospective sync validation without re-collection.

How many training episodes are needed to achieve reliable quality statistics?

Reliable dataset-level quality statistics require ≥500 episodes for manipulation tasks and ≥2,000 for navigation tasks due to higher state-space dimensionality. This sample size ensures that distribution estimates (entropy, balance metrics) have <5% standard error. Per-episode metrics (sync, smoothness, completeness) are computable on individual trajectories. [link:ref-bridgedata-v2]BridgeData V2's 60,000-episode collection[/link] enabled robust coverage analysis across 24 kitchen scenes[ref:ref-bridgedata-v2]. [link:ref-open-x-embodiment]Open X-Embodiment's 527-task corpus[/link] required 1,000+ episodes per task for stable generalization estimates[ref:ref-open-x-embodiment]. Start quality evaluation after collecting 200 episodes to identify systematic issues early.

Can I use quality metrics to filter existing datasets before training?

Yes, quality-based filtering improves sample efficiency and final performance. Compute composite quality scores (weighted average of sync, smoothness, completeness, coverage) for each episode and train on the top 70-85% by score. [link:ref-robocat-paper]RoboCat's quality-weighted sampling[/link] gave 2× probability to high-quality episodes, improving sample efficiency by 34%[ref:ref-robocat-paper]. [link:ref-lerobot-docs]LeRobot's data loaders[/link] support filtering by metadata fields including quality scores. Aggressive filtering (keeping <50%) risks reducing diversity; validate that filtered subsets maintain state-space coverage via entropy metrics. [link:ref-truelabel-marketplace]Truelabel's marketplace[/link] provides pre-computed quality scores enabling instant filtering without local validation runs.

What quality metrics correlate most strongly with policy success rates?

Observation completeness and state-space coverage show the strongest correlation with policy success rates (Pearson r=0.71 and r=0.68 respectively in [link:ref-open-x-embodiment]Open X-Embodiment's analysis[/link])[ref:ref-open-x-embodiment]. Temporal synchronization correlates moderately (r=0.54) but has high impact on control stability. Action smoothness shows weaker correlation (r=0.41) but is critical for sim-to-real transfer. [link:ref-rt-1-paper]RT-1's ablations[/link] found that improving completeness from 95% to 98% added 9 success-rate points, while sync improvements (40ms to 15ms) added 11 points[ref:ref-rt-1-paper]. Label consistency matters most for long-horizon tasks where task segmentation errors compound. Prioritize completeness and coverage for maximum ROI.

How do I validate quality for datasets in non-standard formats?

For non-standard formats, implement custom parsers that extract the same core metrics: timestamps (for sync analysis), action arrays (for smoothness), observation arrays (for completeness), and metadata (for coverage). [link:ref-hdf5-intro]HDF5's hierarchical structure[/link] accommodates arbitrary schemas; write validators that traverse group hierarchies and check dataset shapes. [link:ref-mcap-format]MCAP's self-describing format[/link] embeds schemas in file headers, enabling format-agnostic parsing[ref:ref-mcap-format]. [link:ref-lerobot-dataset-format]LeRobot's conversion tools[/link] transform 8 common formats (ROS bags, RLDS, raw HDF5) into a standardized schema for unified validation. [link:ref-truelabel-marketplace]Truelabel's ingestion pipeline[/link] accepts 12 source formats and normalizes to a common representation before quality checks, eliminating per-format validation code.

Should I prioritize quality or quantity when building training datasets?

Prioritize quality until you reach minimum viable coverage (≥500 episodes for manipulation, ≥2,000 for navigation), then scale quantity while maintaining quality thresholds. [link:ref-open-x-embodiment]Open X-Embodiment's experiments[/link] showed that 10,000 high-quality episodes (composite score >0.85) outperformed 50,000 medium-quality episodes (score 0.60-0.75) by 18 percentage points on held-out tasks[ref:ref-open-x-embodiment]. Beyond minimum coverage, returns diminish: [link:ref-rt-1-paper]RT-1's scaling curves[/link] showed that doubling dataset size from 100K to 200K episodes improved success rates by only 4 points when quality remained constant[ref:ref-rt-1-paper]. [link:ref-truelabel-marketplace]Truelabel's request system[/link] allows buyers to specify quality floors (e.g., completeness ≥98%, sync ≤20ms) ensuring quantity scaling doesn't degrade quality.

Looking for training data quality evaluation?

Specify modality, task, environment, rights, and delivery format. Truelabel matches you with vetted capture partners — every delivery includes consent artifacts and commercial licensing by default.

Explore Quality-Validated Physical AI Datasets