truelabel

DATASET CATALOG

The physical AI dataset catalog built for buyers, not just researchers.

Compare robotics, egocentric, teleoperation, manipulation, simulation, and physical-world datasets by modality, task, format, license, commercial use clarity, and consent risk.

DIRECT ANSWER

truelabel’s dataset catalog helps physical AI teams decide whether public data is enough, where commercial use or consent risk is unclear, and what custom bounty data is still required for deployment. The curated catalog is paired with a broader Hugging Face robotics watchlist covering 1,000 source records.

Deep buyer analysis

Curated profiles

36 source-backed dataset pages with rights, consent, QA, and deployment-fit notes.

Broad source index

Hugging Face watchlist

1,000 robotics-tagged Hub records for discovery, prioritization, and future profile promotion.

Decision pages

Comparisons

26 head-to-head dataset comparison pages with buyer decision matrices.

CATALOG

36 source-backed dataset profiles

Commercial use
License
Modality
Task
Robot
Format

36 of 36 datasets
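As a rough sketch of how these facets can drive buyer triage, the snippet below models a few catalog entries as plain records and filters them by commercial-use status and task. The records and field values are illustrative stand-ins, not the catalog's actual data.

```python
# Minimal facet-filter sketch. The records below are illustrative
# stand-ins for catalog entries, not the catalog's real data.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    commercial_use: str   # e.g. "unclear", "restricted", "permissive (verify)"
    modalities: tuple
    task: str

CATALOG = [
    DatasetRecord("Open X-Embodiment", "unclear",
                  ("RGB-D", "Proprioception"), "Robot Grasping"),
    DatasetRecord("Ego4D", "restricted",
                  ("Egocentric video", "Motion capture"), "Human Object Interaction"),
    DatasetRecord("RoboMimic", "permissive (verify)",
                  ("Proprioception", "RGB-D"), "Robot Grasping"),
]

def filter_catalog(records, commercial_use=None, modality=None, task=None):
    """Keep records matching every facet that was supplied."""
    out = []
    for r in records:
        if commercial_use is not None and r.commercial_use != commercial_use:
            continue
        if modality is not None and modality not in r.modalities:
            continue
        if task is not None and r.task != task:
            continue
        out.append(r)
    return out

unclear_grasping = filter_catalog(CATALOG, commercial_use="unclear",
                                  task="Robot Grasping")
print([r.name for r in unclear_grasping])  # ['Open X-Embodiment']
```

The same pattern extends to the remaining facets (license, robot, format) by adding one guard per facet; an unset facet simply matches everything.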

Open X-Embodiment

Commercial use unclear · Multi-institution robot demonstration corpus; exact per-task scale varies by contributing dataset.

A large cross-institution collection of robot demonstrations spanning many embodiments and manipulation tasks.

  • RGB-D
  • Proprioception
  • Robot Grasping

DROID

Commercial use unclear · Large real-world manipulation corpus; check the source for current release counts.

A real-world robot manipulation dataset focused on diverse teleoperated demonstrations outside narrow lab-only settings.

  • Teleoperation
  • RGB-D
  • Robot Grasping

BridgeData V2

Commercial use unclear · Robot manipulation demonstrations across multiple tasks; the source release describes the exact split.

A robot manipulation dataset from Berkeley focused on real-world behavior cloning and task generalization.

  • RGB-D
  • Proprioception
  • Robot Grasping

RT-1

Commercial use unclear · Large corpus of language-conditioned robot demonstrations described in the source paper and project materials.

A robotics transformer data release associated with language-conditioned robot manipulation research.

  • RGB-D
  • Proprioception
  • Household Manipulation

ALOHA

Commercial use unclear · Task-specific demonstrations released around the ALOHA platform and follow-on projects.

A low-cost bimanual teleoperation platform and dataset family used for imitation learning in dexterous manipulation.

  • Teleoperation
  • RGB-D
  • Bimanual Manipulation

RoboMimic

Source appears permissive; verify data terms · Benchmark datasets and demonstration formats vary by task suite.

A benchmark and dataset framework for robot imitation learning with standardized tasks and evaluation utilities.

  • Proprioception
  • RGB-D
  • Robot Grasping

RoboNet

Commercial use unclear · Multi-robot manipulation dataset; source materials specify the exact robot and task counts.

A multi-robot dataset for visual foresight and manipulation policy research.

  • RGB-D
  • Proprioception
  • Robot Grasping

Meta-World

Source appears permissive; verify data terms · Benchmark suite of simulated manipulation tasks.

A simulated manipulation benchmark for multi-task and meta-reinforcement learning.

  • Proprioception
  • Robot Grasping

RLBench

Commercial use unclear · Large simulated task suite; source materials define the current task count.

A simulated robot learning benchmark with many manipulation tasks in CoppeliaSim.

  • RGB-D
  • Proprioception
  • Robot Grasping

CALVIN

Commercial use unclear · Long-horizon simulated benchmark and demonstrations.

A benchmark for language-conditioned long-horizon robot manipulation in simulated environments.

  • RGB-D
  • Proprioception
  • Household Manipulation

ManiSkill

Source appears permissive; verify data terms · Simulation suite with tasks and environments maintained by the ManiSkill project.

A simulation benchmark and toolkit for manipulation skills and embodied AI policy evaluation.

  • RGB-D
  • Proprioception
  • Robot Grasping

Ego4D

Commercial use restricted · Large-scale first-person video corpus with annotations; the source controls the exact access terms.

A large-scale egocentric video dataset focused on first-person human activity understanding.

  • Egocentric video
  • Motion capture
  • Human Object Interaction

EPIC-KITCHENS

Commercial use restricted · Large egocentric kitchen activity dataset with action annotations.

An egocentric video dataset of kitchen activities used for action recognition and human-object interaction research.

  • Egocentric video
  • Human Object Interaction

HOI4D

Commercial use unclear · Human-object interaction dataset with 4D annotations; exact release details are documented on the source site.

A 4D egocentric human-object interaction dataset with RGB-D and pose-oriented annotations.

  • Egocentric video
  • RGB-D
  • Human Object Interaction

DexYCB

Commercial use unclear · Dexterous grasping dataset with YCB objects and pose labels.

A dexterous hand-object interaction dataset centered on grasping YCB objects with 3D annotations.

  • RGB-D
  • Motion capture
  • Human Object Interaction

ScanNet

Commercial use restricted · Large indoor RGB-D scene reconstruction corpus.

An indoor RGB-D reconstruction dataset used for 3D scene understanding.

  • RGB-D
  • Point cloud
  • Navigation

Habitat datasets

Commercial use unclear · Multiple scene and task datasets under the AI Habitat ecosystem.

A family of embodied AI datasets and simulation assets for navigation and rearrangement research.

  • RGB-D
  • Point cloud
  • Navigation

AI2-THOR

Source appears permissive; verify data terms · Interactive household simulation scenes and tasks.

An interactive simulated environment for embodied AI agents in household-like scenes.

  • RGB-D
  • Navigation

BEHAVIOR

Commercial use unclear · Household activity benchmark and simulation assets.

A benchmark for household activities and embodied AI tasks in simulation.

  • RGB-D
  • Proprioception
  • Household Manipulation

Something-Something V2

Commercial use restricted · Large human-object action video dataset; access and current terms are controlled by the dataset host.

A human action video dataset focused on object interactions and temporal reasoning.

  • Third-person video
  • Human Object Interaction

Waymo Open Dataset

Commercial use restricted · Large autonomous driving corpus with camera, LiDAR, and labeled traffic scenes.

A large autonomous driving dataset with camera, LiDAR, and labeled traffic scenes.

  • RGB-D
  • Point cloud
  • Navigation

ObjectFolder

Commercial use unclear · Object-centric multimodal assets; source materials define the current object count and modalities.

A dataset family for object-centric physical properties, geometry, and multimodal perception research.

  • RGB-D
  • Tactile
  • Robot Grasping

BC-Z

Commercial use unclear · Task demonstrations and model references associated with the BC-Z project.

A behavior cloning project focused on zero-shot task generalization for robots.

  • RGB-D
  • Proprioception
  • Robot Grasping

DexMV

Commercial use unclear · Dexterous manipulation demonstrations and visual observations; verify source for release details.

A dexterous manipulation dataset focused on multi-view visual observations and hand-object interaction.

  • RGB-D
  • Motion capture
  • Human Object Interaction

Kinetics

Commercial use restricted · Large action recognition video corpus maintained through public benchmark releases.

A large video action recognition dataset used widely for video model pretraining.

  • Third-person video
  • Human Object Interaction

RoboSuite

Source appears permissive; verify data terms · Simulation tasks and assets for manipulation research.

A simulation framework and benchmark suite for robot manipulation tasks.

  • RGB-D
  • Proprioception
  • Robot Grasping

RH20T

Commercial use unclear · Source describes more than 110,000 contact-rich manipulation sequences with visual, force, audio, action, and human demonstration signals.

A real-world contact-rich robot manipulation dataset with multimodal sensing, force, audio, and human demonstration video.

  • Teleoperation
  • RGB-D
  • Robot Grasping

AgiBot World

Commercial use unclear · Hugging Face organization page describes the Beta release as 1M+ trajectories and 2,976.4 hours across 217 tasks, 87 skills, 3,000+ objects, and 100+ real-world scenarios.

A large-scale real-world robot manipulation dataset family for fine-grained manipulation, tool use, and multi-robot collaboration.

  • Teleoperation
  • RGB-D
  • Household Manipulation

RoboCasa

Commercial use unclear · RoboCasa365 source materials describe 365 everyday tasks, 2,500 kitchen environments, 600+ hours of human demonstration data, and 1,600+ hours of synthetic demonstrations.

A large-scale kitchen simulation framework and dataset family for everyday manipulation tasks in diverse household environments.

  • RGB-D
  • Proprioception
  • Household Manipulation

LIBERO

Commercial use unclear · Benchmark datasets are organized around multiple LIBERO task suites, including spatial, object, goal, and long-horizon manipulation variants.

A benchmark suite for lifelong robot learning and language-conditioned manipulation tasks.

  • RGB-D
  • Proprioception
  • Robot Grasping

RoboSet

Commercial use unclear · Source describes 30,050 trajectories, including 9,500 collected through teleoperation, across 12 skills and 38 tasks with four camera views.

A real-world multi-task kitchen manipulation dataset with teleoperated and kinesthetic demonstrations.

  • Teleoperation
  • RGB-D
  • Household Manipulation

RoboTurk

Commercial use unclear · Project materials describe over 100 hours of real robot data and thousands of successful manipulation demonstrations collected through remote users.

A large-scale teleoperation data collection platform and dataset family for robot manipulation tasks.

  • Teleoperation
  • RGB-D
  • Robot Grasping

UMI

Commercial use unclear · Project materials emphasize portable in-the-wild data collection and fast demonstrations for tasks such as cup manipulation, dish washing, cloth folding, and dynamic tossing.

Universal Manipulation Interface is an in-the-wild human demonstration framework for transferring portable gripper data to robot policies.

  • Egocentric video
  • Teleoperation
  • Bimanual Manipulation

FurnitureBench

Commercial use unclear · Documentation describes 219.6 hours and 5,100 successful furniture assembly demonstrations collected with controller and keyboard inputs.

A real-world long-horizon furniture assembly benchmark with successful demonstration data.

  • Teleoperation
  • RGB-D
  • Furniture Assembly

TACO Play

Commercial use unclear · TensorFlow Datasets documentation lists TACO Play as Franka kitchen interaction data with train and test splits and a 47.77 GiB dataset size.

A kitchen robot manipulation dataset with Franka arm interaction data available through TensorFlow Datasets.

  • RGB-D
  • Proprioception
  • Household Manipulation

LeRobot datasets

Commercial use unclear · LeRobot documentation describes a standardized dataset ecosystem on Hugging Face Hub using Parquet for tabular data and MP4 for video observations.

A Hugging Face robotics dataset ecosystem and standardized dataset format for multimodal robot learning data.

  • Teleoperation
  • RGB-D
  • Robot Grasping

BROWSE BY

Pick a facet

MODALITIES

Direct links — modality

TASKS

Direct links — task

COMPARISONS

Dataset comparisons for buyer decisions

robot foundation model pretraining and real-world manipulation evaluation

Open X-Embodiment vs DROID

Use Open X-Embodiment for broad cross-robot pretraining; use DROID when real-world manipulation diversity is the deciding factor.

egocentric perception pretraining before custom robotics data collection

Ego4D vs EPIC-KITCHENS

Use Ego4D for broad egocentric activity coverage; use EPIC-KITCHENS for kitchen-specific hand-object action understanding.

choosing a manipulation benchmark before collecting real-world data

RoboMimic vs Meta-World

Use RoboMimic for imitation-learning trajectory workflows; use Meta-World for multi-task simulated manipulation benchmarks.

real-world manipulation pretraining before buyer-specific eval collection

DROID vs BridgeData V2

Use DROID when in-the-wild manipulation diversity matters; use BridgeData V2 for Berkeley-style behavior-cloning baselines and task generalization references.

choosing between general manipulation and bimanual teleoperation data

DROID vs ALOHA

Use DROID for broader single-arm real-world manipulation; use ALOHA when bimanual teleoperation and coordinated two-arm tasks are the deciding factor.

deciding whether a team needs real captured trajectories or benchmark-style imitation data

DROID vs RoboMimic

Use DROID for real-world manipulation diversity; use RoboMimic for controlled imitation-learning benchmarks and repeatable evaluation workflows.

foundation-model pretraining versus narrower behavior-cloning baselines

Open X-Embodiment vs BridgeData V2

Use Open X-Embodiment for broad cross-embodiment coverage; use BridgeData V2 when the buyer wants a narrower manipulation baseline with clearer task-family focus.

cross-embodiment pretraining versus bimanual task specialization

Open X-Embodiment vs ALOHA

Use Open X-Embodiment for cross-robot breadth; use ALOHA when bimanual teleoperation and low-cost dual-arm demonstrations are central to the model objective.

simulation benchmark selection before commissioning real-world validation data

RLBench vs ManiSkill

Use RLBench for language-conditioned simulated manipulation tasks; use ManiSkill for broader manipulation skill benchmarking and synthetic policy evaluation.

simulated manipulation benchmark selection for robotics teams

RLBench vs RoboSuite

Use RLBench for task-rich simulated manipulation; use RoboSuite for controller experiments and standardized robot-manipulation environments.

matching benchmark data to policy-learning and instruction-following objectives

Meta-World vs CALVIN

Use Meta-World for multi-task reinforcement-learning benchmarks; use CALVIN for long-horizon, language-conditioned manipulation evaluation.

household robot task planning before real home-data collection

CALVIN vs BEHAVIOR

Use CALVIN for long-horizon manipulation in a focused simulated setup; use BEHAVIOR when household task taxonomies and embodied AI planning coverage matter more.

egocentric perception and human-object interaction pretraining

Ego4D vs HOI4D

Use Ego4D for broad egocentric activity coverage; use HOI4D when geometry-aware human-object interaction and RGB-D context are more important.

first-person household perception before robot-aligned capture

EPIC-KITCHENS vs HOI4D

Use EPIC-KITCHENS for kitchen-specific action understanding; use HOI4D when 4D hand-object geometry and broader object interaction are required.

dexterous perception and geometry-aware interaction modeling

DexYCB vs HOI4D

Use DexYCB for dexterous hand-object pose and YCB object grasping; use HOI4D for broader egocentric human-object interaction with RGB-D context.

indoor scene understanding versus simulated navigation benchmark selection

ScanNet vs Habitat datasets

Use ScanNet for real indoor RGB-D scene reconstruction; use Habitat datasets for simulated embodied navigation and rearrangement tasks.

household simulation selection before collecting consented real-world data

AI2-THOR vs BEHAVIOR

Use AI2-THOR for interactive household scenes and embodied agent experiments; use BEHAVIOR for broader household activity task specification.

object property modeling versus dexterous grasp perception

ObjectFolder vs DexYCB

Use ObjectFolder for object-centric geometry, material, and tactile references; use DexYCB for hand-object interaction and pose-labeled grasping examples.

choosing between contact-rich multimodal manipulation and broad real-world manipulation pretraining

RH20T vs DROID

Use RH20T when contact-rich multimodal sensing matters; use DROID when broader in-the-wild manipulation diversity and distributed collection are the deciding factors.

large-scale robot corpus evaluation before buyer-specific embodiment validation

AgiBot World vs Open X-Embodiment

Use AgiBot World to study high-volume robot-data-factory releases; use Open X-Embodiment for cross-institution generalist policy pretraining references.

simulation benchmark selection for household manipulation and VLA evaluation

RoboCasa vs LIBERO

Use RoboCasa for large-scale kitchen simulation and household scene diversity; use LIBERO for structured lifelong-learning and language-conditioned manipulation benchmarks.

real-world kitchen manipulation data selection and ingestion planning

RoboSet vs TACO Play

Use RoboSet for multi-view, multi-task kitchen manipulation; use TACO Play when TFDS-compatible Franka kitchen interaction data is the more practical ingestion path.

human-to-robot transfer versus bimanual teleoperation data collection

UMI vs ALOHA

Use UMI when portable in-the-wild human demonstrations are central; use ALOHA when the buyer needs low-cost bimanual teleoperation data aligned to a robot platform.

assembly and long-horizon manipulation benchmark selection

FurnitureBench vs CALVIN

Use FurnitureBench for real-world long-horizon assembly demonstrations; use CALVIN for simulated language-conditioned long-horizon manipulation evaluation.

teleoperation collection design versus kitchen task training data

RoboTurk vs RoboSet

Use RoboTurk as a remote teleoperation collection reference; use RoboSet for a more kitchen-focused real-world multi-task manipulation corpus.

robotics dataset discovery and format standardization versus a defined cross-robot dataset release

LeRobot datasets vs Open X-Embodiment

Use LeRobot datasets when the distribution and format ecosystem matter; use Open X-Embodiment when the buyer needs a specific cross-embodiment corpus reference.

RESEARCH PATHS

Use this record as part of a broader dataset review

A dataset record is only useful when it connects into the rest of the buyer workflow. The next review step is usually not another summary; it is a fit check, rights triage, source comparison, or custom bounty spec that names the missing proof.

For physical AI teams, the hard question is whether the public source can support a specific model objective under real deployment constraints. That requires adjacent dataset records, tools, comparisons, and sourcing paths, plus external references that a reviewer can open and challenge.

Use the links below to keep the review grounded. Start broad when discovery is incomplete, move into profile and comparison pages when the candidate source is known, and switch to custom collection when the blocker is rights, consent, geography, robot embodiment, or target environment coverage.

INTERNAL LINKS

Continue the buyer workflow

EXTERNAL REFERENCES

Source context to verify

TRUELABEL ROUTING

Public data rarely closes the whole gap.

If the catalog surfaces a missing modality, geography, license, or deployment environment, tell us what you need and our team will route the request.

Request custom data