HF AUTHOR CLUSTER
4 robotics-tagged HF records from haosulab, totaling 6,235 cumulative downloads. Some records cite published arXiv research.
DIRECT ANSWER
Author clusters consolidate every record from one publisher into a single buyer-review surface. haosulab ships 4 robotics datasets on Hugging Face. Top license: apache-2.0. Tier breakdown: 1 record indexed as Tier A, 1 as Tier B, and 2 demoted (their URLs redirect here).
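The headline stats above can be re-derived directly from the Hub, which is a quick way to spot a stale cluster page. A minimal sketch in Python, assuming a recent huggingface_hub client; the loose substring match for the robotics tag and the downloads field are assumptions about how this page aggregates, not a documented pipeline.

    # Sketch: recompute the cluster's headline stats from the Hugging Face Hub.
    from collections import Counter
    from huggingface_hub import HfApi

    api = HfApi()
    records = [
        ds for ds in api.list_datasets(author="haosulab")
        # Tag layout varies on the Hub (e.g. "task_categories:robotics"),
        # so this substring match is an assumption, not a documented filter.
        if any("robotics" in tag for tag in (ds.tags or []))
    ]

    total_downloads = sum(getattr(ds, "downloads", 0) or 0 for ds in records)
    licenses = Counter(
        tag.removeprefix("license:")
        for ds in records
        for tag in (ds.tags or [])
        if tag.startswith("license:")  # the Hub encodes licenses as "license:<id>" tags
    )

    print(f"records: {len(records)}")
    print(f"cumulative downloads: {total_downloads}")
    print(f"top license: {licenses.most_common(1)}")

If the printed counts drift from the numbers above, the cluster page is the stale side.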
DATASETS
4 of 4 datasets
2,644 downloads · apache-2.0
ManiSkill Demonstrations: This dataset repo contains all of the latest ManiSkill demonstration datasets as well as some pretrained model weights used to generate some demonstrations. To download by envi… (a download sketch follows this list).
1,868 downloads · mit
Assets for Real2Sim evaluation of the Bridge v2 dataset from https://github.com/simpler-env/SimplerEnv/
894 downloads · apache-2.0
ManiSkill2 Data: Update: ManiSkill 3 has been released (https://github.com/haosulab/ManiSkill/). It uses different datasets than ManiSkill2, so the data here is not expected to transfer over. ManiSkill2 is…
829 downloads · cc-by-4.0
AI2THOR Scene/Object Dataset for ManiSkill: This is a modified version of the Habitat Synthetic Scenes Dataset (HSSD), which is also modified based on the original AI2THOR scenes and assets in order to…
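The first entry's card text is cut off where it begins to explain per-environment downloads. As a generic fallback, any of the records above can be pulled with huggingface_hub; a minimal sketch, where the repo id and the allow_patterns glob are illustrative assumptions rather than values confirmed by the record text.

    # Sketch: pull a per-environment subset of one listed record.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="haosulab/ManiSkill_Demonstrations",  # assumed id for the first entry
        repo_type="dataset",
        allow_patterns=["**/PickCube-v1/**"],  # hypothetical per-environment filter
    )
    print(local_dir)

Check the record's own card for the real download instructions before relying on the glob.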
RESEARCH PATHS
A dataset record is only useful when it connects into the rest of the buyer workflow. The next review step is usually not another summary; it is a fit check, rights triage, source comparison, or custom bounty spec that names the missing proof.
For physical AI teams, the hard question is whether the public source can support a specific model objective under real deployment constraints. That requires adjacent dataset records, tools, comparisons, and sourcing paths, plus external references that a reviewer can open and challenge.
Use the links below to keep the review grounded. Start broad when discovery is incomplete, move into profile and comparison pages once the candidate source is known, and switch to custom collection when the blocker is rights, consent, geography, robot embodiment, or target-environment coverage.
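When the next step is a fit check rather than more browsing, it helps to make the pass/fail criteria explicit before opening any profile page. A minimal sketch of such a rubric, built around the axes named above (rights, consent, embodiment, target environment); the field names, pass rule, and verdict strings are illustrative assumptions, not a TrueLabel schema.

    # Sketch: an explicit fit-check rubric for a candidate public source.
    # All field names and the pass rule are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class FitCheck:
        commercial_license: bool   # license text permits the intended commercial use
        consent_documented: bool   # contributor consent is separable from the license
        embodiment_match: bool     # robot embodiment matches the deployment target
        environment_match: bool    # scenes cover the target deployment environment

        def verdict(self) -> str:
            # Rights and consent are hard blockers; coverage gaps route to capture.
            if not (self.commercial_license and self.consent_documented):
                return "blocked: rights/consent -> custom collection with explicit terms"
            if not (self.embodiment_match and self.environment_match):
                return "gap: coverage -> scoped capture request"
            return "fit: proceed to source comparison"

    print(FitCheck(True, False, True, True).verdict())

Writing the verdict down this way keeps the review challengeable: a second reviewer disputes a boolean, not a vibe.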
INTERNAL LINKS
Use the catalog to compare source-backed dataset profiles by modality, task, rights signal, consent risk, and deployment fit.
Scan the broader robotics dataset surface before narrowing into promoted profiles, comparisons, and custom collection specs.
Track source updates, licensing notes, and buyer-readiness changes that should trigger a renewed review.
Score whether a public source is enough for the model, rights path, modalities, and target environment.
Separate source license language from contributor consent, redistribution, private-space risk, and model-use assumptions (see the triage sketch after this list).
Turn a public-source gap into a scoped capture request with sample QA, metadata, and delivery requirements.
Compare data providers when the answer is not another public dataset but a better sourcing or capture route.
Use the company index to separate annotation vendors, data engines, marketplaces, and specialist capture teams.
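For the license-versus-consent triage above, the license half is machine-readable on the Hub while the consent half never is. A minimal sketch, where the repo id is an illustrative assumption; the "license:<id>" tag convention is real Hub behavior, but nothing in that tag speaks to contributor consent, which stays a manual check against the card prose.

    # Sketch: license-vs-consent triage for one Hub record.
    from huggingface_hub import HfApi

    info = HfApi().dataset_info("haosulab/ManiSkill_Demonstrations")  # assumed id
    license_tags = [t for t in (info.tags or []) if t.startswith("license:")]

    triage = {
        "license": license_tags,         # machine-readable rights signal
        "consent": "manual review",      # never inferable from the license tag
        "redistribution": "check card",  # card prose may restrict re-hosting
    }
    print(triage)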
EXTERNAL REFERENCES
Market context for why physical AI systems need custom, enriched, real-world data beyond generic labeling workflows.
Robotics dataset and tooling context for Hugging Face-based collection, sharing, conversion, and training workflows.
A cross-embodiment robotics dataset reference for comparing trajectory scale, robot diversity, and VLA training assumptions.
A large in-the-wild robot manipulation dataset reference for real-world trajectory capture and deployment transfer risk.
TRUELABEL ROUTING
If the Hub records don't carry the license, consent, or deployment fit your team needs, commission a custom collection on the same modality with explicit commercial terms.