Curated profiles
36 source-backed dataset pages with rights, consent, QA, and deployment-fit notes.
DATASET CATALOG
Compare robotics, egocentric, teleoperation, manipulation, simulation, and physical-world datasets by modality, task, format, license, commercial-use clarity, and consent risk.
DIRECT ANSWER
truelabel’s dataset catalog helps physical AI teams decide whether public data is enough, where commercial use or consent risk is unclear, and what custom bounty data is still required for deployment. The curated catalog is paired with a broader Hugging Face robotics watchlist covering 1,000 source records.
36 source-backed dataset pages with rights, consent, QA, and deployment-fit notes.
1,000 robotics-tagged Hub records for discovery, prioritization, and future profile promotion.
26 head-to-head dataset comparison pages with buyer decision matrices.
CATALOG
36 datasets
Commercial use unclear · Multi-institution robot demonstration corpus; exact per-task scale varies by contributing dataset.
A large cross-institution collection of robot demonstrations spanning many embodiments and manipulation tasks.
Commercial use unclear · Large real-world manipulation corpus; check source for current release counts.
A real-world robot manipulation dataset focused on diverse teleoperated demonstrations outside narrow lab-only settings.
Commercial use unclear · Robot manipulation demonstrations across multiple tasks; source release describes exact split.
A robot manipulation dataset from Berkeley focused on real-world behavior cloning and task generalization.
Commercial use unclear · Large language-conditioned robot demonstrations described in the source paper and project materials.
A robotics transformer data release associated with language-conditioned robot manipulation research.
Commercial use unclear · Task-specific demonstrations released around the ALOHA platform and follow-on projects.
A low-cost bimanual teleoperation platform and dataset family used for imitation learning in dexterous manipulation.
Source appears permissive; verify data terms · Benchmark datasets and demonstration formats vary by task suite.
A benchmark and dataset framework for robot imitation learning with standardized tasks and evaluation utilities.
Commercial use unclear · Multi-robot manipulation dataset; source materials specify exact robot/task counts.
A multi-robot dataset for visual foresight and manipulation policy research.
Source appears permissive; verify data terms · Benchmark suite of simulated manipulation tasks.
A simulated manipulation benchmark for multi-task and meta-reinforcement learning.
Commercial use unclear · Large simulated task suite; source materials define current task count.
A simulated robot learning benchmark with many manipulation tasks in CoppeliaSim.
Commercial use unclear · Long-horizon simulated benchmark and demonstrations.
A benchmark for language-conditioned long-horizon robot manipulation in simulated environments.
Source appears permissive; verify data terms · Simulation suite with tasks and environments maintained by the ManiSkill project.
A simulation benchmark and toolkit for manipulation skills and embodied AI policy evaluation.
Commercial use restricted · Large-scale first-person video corpus with annotations; source controls exact access terms.
A large-scale egocentric video dataset focused on first-person human activity understanding.
Commercial use restricted · Large egocentric kitchen activity dataset with action annotations.
An egocentric video dataset of kitchen activities used for action recognition and human-object interaction research.
Commercial use unclear · Human-object interaction dataset with 4D annotations; exact release details live on the source site.
A 4D egocentric human-object interaction dataset with RGB-D and pose-oriented annotations.
Commercial use unclear · Dexterous grasping dataset with YCB objects and pose labels.
A dexterous hand-object interaction dataset centered on grasping YCB objects with 3D annotations.
Commercial use restricted · Large indoor RGB-D scene reconstruction corpus.
An indoor RGB-D reconstruction dataset used for 3D scene understanding.
Commercial use unclear · Multiple scene and task datasets under the AI Habitat ecosystem.
A family of embodied AI datasets and simulation assets for navigation and rearrangement research.
Source appears permissive; verify data terms · Interactive household simulation scenes and tasks.
An interactive simulated environment for embodied AI agents in household-like scenes.
Commercial use unclear · Household activity benchmark and simulation assets.
A benchmark for household activities and embodied AI tasks in simulation.
Commercial use restricted · Large human-object action video dataset; access and current terms are controlled by the dataset host.
A human action video dataset focused on object interactions and temporal reasoning.
Commercial use restricted · Large autonomous driving scenes with cameras and LiDAR.
A large autonomous driving dataset with camera, LiDAR, and labeled traffic scenes.
Commercial use unclear · Object-centric multimodal assets; source materials define current object count and modalities.
A dataset family for object-centric physical properties, geometry, and multimodal perception research.
Commercial use unclear · Task demonstrations and model references associated with the BC-Z project.
A behavior cloning project focused on zero-shot task generalization for robots.
Commercial use unclear · Dexterous manipulation demonstrations and visual observations; verify source for release details.
A dexterous manipulation dataset focused on multi-view visual observations and hand-object interaction.
Commercial use restricted · Large action recognition video corpus maintained through public benchmark releases.
A large video action recognition dataset used widely for video model pretraining.
Source appears permissive; verify data terms · Simulation tasks and assets for manipulation research.
A simulation framework and benchmark suite for robot manipulation tasks.
Commercial use unclear · Source describes more than 110,000 contact-rich manipulation sequences with visual, force, audio, action, and human demonstration signals.
A real-world contact-rich robot manipulation dataset with multimodal sensing, force, audio, and human demonstration video.
Commercial use unclear · Hugging Face organization page describes the Beta release as 1M+ trajectories and 2,976.4 hours across 217 tasks, 87 skills, 3,000+ objects, and 100+ real-world scenarios.
A large-scale real-world robot manipulation dataset family for fine-grained manipulation, tool use, and multi-robot collaboration.
Commercial use unclear · RoboCasa365 source materials describe 365 everyday tasks, 2,500 kitchen environments, 600+ hours of human demonstration data, and 1,600+ hours of synthetic demonstrations.
A large-scale kitchen simulation framework and dataset family for everyday manipulation tasks in diverse household environments.
Commercial use unclear · Benchmark datasets are organized around multiple LIBERO task suites, including spatial, object, goal, and long-horizon manipulation variants.
A benchmark suite for lifelong robot learning and language-conditioned manipulation tasks.
Commercial use unclear · Source describes 30,050 trajectories, including 9,500 collected through teleoperation, across 12 skills and 38 tasks with four camera views.
A real-world multi-task kitchen manipulation dataset with teleoperated and kinesthetic demonstrations.
Commercial use unclear · Project materials describe over 100 hours of real robot data and thousands of successful manipulation demonstrations collected through remote users.
A large-scale teleoperation data collection platform and dataset family for robot manipulation tasks.
Commercial use unclear · Project materials emphasize portable in-the-wild data collection and fast demonstrations for tasks such as cup manipulation, dish washing, cloth folding, and dynamic tossing.
Universal Manipulation Interface is an in-the-wild human demonstration framework for transferring portable gripper data to robot policies.
Commercial use unclear · Documentation describes 219.6 hours and 5,100 successful furniture assembly demonstrations collected with controller and keyboard inputs.
A real-world long-horizon furniture assembly benchmark with successful demonstration data.
Commercial use unclear · TensorFlow Datasets documentation lists TACO Play as Franka kitchen interaction data with train and test splits and a 47.77 GiB dataset size.
A kitchen robot manipulation dataset with Franka arm interaction data available through TensorFlow Datasets.
Commercial use unclear · LeRobot documentation describes a standardized dataset ecosystem on Hugging Face Hub using Parquet for tabular data and MP4 for video observations.
A Hugging Face robotics dataset ecosystem and standardized dataset format for multimodal robot learning data.
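The LeRobot entry above describes a standardized layout that splits each episode into a Parquet table and MP4 video observations. A minimal sketch of pairing those files per episode, assuming a hypothetical chunked directory layout (real LeRobot repositories define their own structure):

```python
from collections import defaultdict
from pathlib import PurePosixPath

def pair_episode_files(paths):
    """Pair each episode's Parquet table with its MP4 video observations.

    The chunked directory layout below is hypothetical; treat this as an
    illustration of the Parquet-plus-MP4 split, not the LeRobot loader."""
    episodes = defaultdict(lambda: {"tabular": None, "videos": []})
    for p in paths:
        path = PurePosixPath(p)
        if path.suffix == ".parquet":
            episodes[path.stem]["tabular"] = p
        elif path.suffix == ".mp4":
            episodes[path.stem]["videos"].append(p)
    return dict(episodes)

files = [
    "data/chunk-000/episode_000000.parquet",
    "videos/chunk-000/observation.images.cam/episode_000000.mp4",
]
paired = pair_episode_files(files)
# paired["episode_000000"] now holds one table path and one video path
```

Keying on the file stem works because, in this sketch, tabular and video files for an episode share the same episode identifier in their names.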
BROWSE BY
Modalities: Egocentric video, teleoperation, RGB-D, point cloud, tactile.
Tasks: Grasping, manipulation, navigation, household, warehouse, long-horizon.
Robots: Franka, UR5, ALOHA, Stretch, Sawyer, xArm, mobile manipulators.
Formats: RLDS, LeRobot, HDF5, MCAP, ROS bag, Parquet.
Licenses: Apache-2.0, MIT, CC-BY, custom, research-only.
Commercial use: Allowed, restricted, unclear, or research-only, assessed beyond the license tag.
Side-by-side dataset decisions for buyer-relevant choices.
Ranked nearby datasets per source, scored by overlap on robot, modality, task, and format.
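The ranking described above scores overlap on robot, modality, task, and format. A simplified sketch of that idea using mean Jaccard overlap per facet; the facet tags for the named datasets are illustrative, and this is not truelabel's actual scoring function:

```python
def overlap_score(source, candidate, facets=("robot", "modality", "task", "format")):
    """Mean Jaccard overlap across the named facets.

    Records map facet name -> set of tags. Illustrative only; not
    truelabel's scorer."""
    total = 0.0
    for facet in facets:
        a = set(source.get(facet, ()))
        b = set(candidate.get(facet, ()))
        if a or b:
            total += len(a & b) / len(a | b)
    return total / len(facets)

def rank_nearby(source, candidates):
    """Rank candidate records by descending facet overlap with the source."""
    return sorted(candidates, key=lambda c: overlap_score(source, c), reverse=True)

# Illustrative facet tags, not the catalog's real metadata:
droid = {"robot": {"franka"}, "modality": {"rgb"}, "task": {"manipulation"}, "format": {"rlds"}}
aloha = {"robot": {"aloha"}, "modality": {"rgb"}, "task": {"manipulation"}, "format": {"hdf5"}}
ego4d = {"modality": {"egocentric-video"}, "task": {"activity-understanding"}, "format": {"mp4"}}
# aloha shares two facets with droid and ego4d shares none, so aloha ranks first
```

Averaging over all facets, rather than only the facets a record populates, keeps sparse records from outranking well-described ones.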
COMPARISONS
Use Open X-Embodiment for broad cross-robot pretraining; use DROID when real-world manipulation diversity is the deciding factor.
Use Ego4D for broad egocentric activity coverage; use EPIC-KITCHENS for kitchen-specific hand-object action understanding.
Use RoboMimic for imitation-learning trajectory workflows; use Meta-World for multi-task simulated manipulation benchmarks.
Use DROID when in-the-wild manipulation diversity matters; use BridgeData V2 for Berkeley-style behavior-cloning baselines and task generalization references.
Use DROID for broader single-arm real-world manipulation; use ALOHA when bimanual teleoperation and coordinated two-arm tasks are the deciding factor.
Use DROID for real-world manipulation diversity; use RoboMimic for controlled imitation-learning benchmarks and repeatable evaluation workflows.
Use Open X-Embodiment for broad cross-embodiment coverage; use BridgeData V2 when the buyer wants a narrower manipulation baseline with clearer task-family focus.
Use Open X-Embodiment for cross-robot breadth; use ALOHA when bimanual teleoperation and low-cost dual-arm demonstrations are central to the model objective.
Use RLBench for language-conditioned simulated manipulation tasks; use ManiSkill for broader manipulation skill benchmarking and synthetic policy evaluation.
Use RLBench for task-rich simulated manipulation; use RoboSuite for controller experiments and standardized robot-manipulation environments.
Use Meta-World for multi-task reinforcement-learning benchmarks; use CALVIN for long-horizon, language-conditioned manipulation evaluation.
Use CALVIN for long-horizon manipulation in a focused simulated setup; use BEHAVIOR when household task taxonomies and embodied AI planning coverage matter more.
Use Ego4D for broad egocentric activity coverage; use HOI4D when geometry-aware human-object interaction and RGB-D context are more important.
Use EPIC-KITCHENS for kitchen-specific action understanding; use HOI4D when 4D hand-object geometry and broader object interaction are required.
Use DexYCB for dexterous hand-object pose and YCB object grasping; use HOI4D for broader egocentric human-object interaction with RGB-D context.
Use ScanNet for real indoor RGB-D scene reconstruction; use Habitat datasets for simulated embodied navigation and rearrangement tasks.
Use AI2-THOR for interactive household scenes and embodied agent experiments; use BEHAVIOR for broader household activity task specification.
Use ObjectFolder for object-centric geometry, material, and tactile references; use DexYCB for hand-object interaction and pose-labeled grasping examples.
Use RH20T when contact-rich multimodal sensing matters; use DROID when broader in-the-wild manipulation diversity and distributed collection are the deciding factors.
Use AgiBot World to study high-volume robot-data-factory releases; use Open X-Embodiment for cross-institution generalist policy pretraining references.
Use RoboCasa for large-scale kitchen simulation and household scene diversity; use LIBERO for structured lifelong-learning and language-conditioned manipulation benchmarks.
Use RoboSet for multi-view, multi-task kitchen manipulation; use TACO Play when TFDS-compatible Franka kitchen interaction data is the more practical ingestion path.
Use UMI when portable in-the-wild human demonstrations are central; use ALOHA when the buyer needs low-cost bimanual teleoperation data aligned to a robot platform.
Use FurnitureBench for real-world long-horizon assembly demonstrations; use CALVIN for simulated language-conditioned long-horizon manipulation evaluation.
Use RoboTurk as a remote teleoperation collection reference; use RoboSet for a more kitchen-focused real-world multi-task manipulation corpus.
Use LeRobot datasets when the distribution and format ecosystem matter; use Open X-Embodiment when the buyer needs a specific cross-embodiment corpus reference.
RESEARCH PATHS
A dataset record is only useful when it connects into the rest of the buyer workflow. The next review step is usually not another summary; it is a fit check, rights triage, source comparison, or custom bounty spec that names the missing proof.
For physical AI teams, the hard question is whether the public source can support a specific model objective under real deployment constraints. That requires adjacent dataset records, tools, comparisons, and sourcing paths, plus external references that a reviewer can open and challenge.
Use the links below to keep the review grounded. Start broad when discovery is incomplete, move into profile and comparison pages when the candidate source is known, and switch to custom collection when the blocker is rights, consent, geography, robot embodiment, or target environment coverage.
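The routing above can be sketched as a small decision function; the step labels are illustrative shorthand for the paths described, not truelabel product routes:

```python
def next_review_step(discovery_complete, blockers):
    """Route a dataset review to its next step, per the paths above.

    Step labels are illustrative, not truelabel product routes."""
    if not discovery_complete:
        return "watchlist scan"          # start broad: discovery is incomplete
    if blockers & {"rights", "consent", "geography", "embodiment", "environment coverage"}:
        return "custom bounty spec"      # a public source cannot close this gap
    return "profile and comparison review"  # candidate known, no hard blocker
```

For example, a known candidate with an unresolved consent question routes straight to a custom collection spec rather than another summary pass.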
INTERNAL LINKS
Scan the broader robotics dataset surface before narrowing into promoted profiles, comparisons, and custom collection specs.
Track source updates, licensing notes, and buyer-readiness changes that should trigger a renewed review.
Score whether a public source is enough for the model, rights path, modalities, and target environment.
Separate source license language from contributor consent, redistribution, private-space risk, and model-use assumptions.
Turn a public-source gap into a scoped capture request with sample QA, metadata, and delivery requirements.
Compare data providers when the answer is not another public dataset but a better sourcing or capture route.
Use the company index to separate annotation vendors, data engines, marketplaces, and specialist capture teams.
EXTERNAL REFERENCES
Market context for why physical AI systems need custom, enriched, real-world data beyond generic labeling workflows.
Robotics dataset and tooling context for Hugging Face based collection, sharing, conversion, and training workflows.
A cross-embodiment robotics dataset reference for comparing trajectory scale, robot diversity, and VLA training assumptions.
A large in-the-wild robot manipulation dataset reference for real-world trajectory capture and deployment transfer risk.
TRUELABEL ROUTING
If the catalog surfaces a missing modality, geography, license, or deployment environment, tell us what you need and our team will route the request.