Glossary
Egocentric data
Egocentric data means first-person video or sensor data captured from the perspective of a person or embodied actor. The term matters because it turns an abstract model or procurement concept into concrete data requirements that supplier samples can be evaluated against.
Quick facts
- Ego4D
- 3,670 hours • 74 locations • 9 countries • 931 wearers • 13 university partners (Feb 2022) • Data Use Agreement required
- EPIC-KITCHENS-100
- 100 hours • 45 kitchens • 4 cities • 90,000 action segments • CC BY-NC 4.0 (non-commercial)
- Project Aria (Meta)
- 200+ research partners • datasets including Aria Everyday Activities, Aria Digital Twin, Aria Everyday Objects, Nymeria, HOT3D
- What public corpora don't carry
- Buyer-specific tasks, environments, SKU sets, hand-pose tracks, and consent attached to a commercial use.
Comparison
| Question | Answer |
|---|---|
| Where it appears | Sourcing specs, QA requirements, dataset manifests, and buyer review notes |
| Why it matters | It turns abstract AI language into a supplier-verifiable requirement |
| Common failure | Using the term without defining modality, format, rights, or acceptance criteria |
How to use this term in a spec
Egocentric data is valuable because it records activity from the actor's point of view, so the camera captures hands, objects, tools, and scene transitions as the task unfolds. Ego4D formalizes this category as large-scale first-person video collected around the world for understanding daily human activity. [1]
What to avoid
Do not use egocentric data as a vague keyword. Define the data files, metadata, rights, QA checks, and delivery format that make it measurable.
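One way to make those dimensions measurable is to write them down as a structured spec entry that both buyer and supplier can check mechanically. The sketch below assumes nothing beyond this page: every field name, value, and the schema itself are illustrative, not a standard.

```python
# A minimal sketch of a measurable egocentric-data spec entry.
# All field names and values are illustrative, not a standard schema.
EGOCENTRIC_SPEC = {
    "modality": "rgb_video",          # data files: first-person RGB video
    "format": {"container": "mp4", "resolution": [1920, 1080], "fps": 30},
    "metadata": ["task_label", "clip_start", "clip_end", "wearer_id"],
    "rights": {"license": "commercial", "consent_artifacts": True},
    "qa_checks": ["hands_visible", "stable_capture", "clip_boundaries_marked"],
    "delivery": "s3_manifest",        # delivery format agreed with supplier
}

def spec_is_measurable(spec: dict) -> bool:
    """A spec is usable only if every dimension named above is defined."""
    required = {"modality", "format", "metadata", "rights", "qa_checks", "delivery"}
    return required <= spec.keys()
```

A spec missing any of these keys, for example one that names only a modality, would fail the check, which is exactly the "vague keyword" failure described above.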
Egocentric data in buyer review
A buyer cannot treat first-person footage as interchangeable stock video. Ego4D, EPIC-KITCHENS, and HOI4D illustrate that useful egocentric datasets depend on task labels, object interaction context, and collection protocols that make the footage interpretable. [2] [3] [4]
Egocentric data supplier evidence
Supplier samples should show field of view, stable capture, hand or tool visibility, clip boundaries, and consent artifacts. Without those details, egocentric data may be visually plausible but unusable for a physical AI buyer's review workflow.
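The review workflow above can be sketched as a buyer-side check over per-clip metadata. The metadata fields (`fov_degrees`, `hands_visible`, `clip_start`, `consent_id`, and so on) and the 90-degree threshold are hypothetical assumptions for illustration; real supplier manifests and acceptance thresholds will differ.

```python
# A hedged sketch of a buyer-side sample review. Field names and the
# field-of-view threshold are hypothetical, not a real manifest format.
def review_sample(meta: dict) -> list:
    """Return the list of failed checks for one supplier sample clip."""
    failures = []
    if meta.get("fov_degrees", 0) < 90:
        failures.append("field of view below 90 degrees")
    if not meta.get("hands_visible", False):
        failures.append("hands or tools not visible")
    if meta.get("clip_start") is None or meta.get("clip_end") is None:
        failures.append("clip boundaries missing")
    if not meta.get("consent_id"):
        failures.append("no consent artifact attached")
    return failures

sample = {"fov_degrees": 110, "hands_visible": True,
          "clip_start": 0.0, "clip_end": 12.4, "consent_id": "C-0042"}
```

A clip like `sample` passes every check; an empty metadata record fails all four, making the gap between "visually plausible" and "reviewable" concrete.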
External references and source context
- Ego4D: Around the World in 3,000 Hours of Egocentric Video (arXiv)
  The Ego4D paper defines and studies egocentric video as first-person video capturing human activity at large scale.
- Ego4D project site (ego4d-data.org)
  The Ego4D project site presents a large egocentric video dataset for first-person activity understanding. Egocentric video remains useful but incomplete for robot data buyers.
- EPIC-KITCHENS project site (epic-kitchens.github.io)
  EPIC-KITCHENS is an egocentric dataset centered on kitchen activities and human-object interaction.
- HOI4D project site (hoi4d.github.io)
  HOI4D documents hand-object interaction data with 4D annotations, showing why egocentric capture needs object and action context.
FAQ
What is Egocentric data?
Egocentric data is first-person video or sensor data captured from the perspective of a person or embodied actor.
Why does it matter for physical AI?
It matters because physical AI data must be connected to actions, environments, metadata, rights, and model use, not just raw files.
How should buyers spec it in a sourcing request?
Specify field of view, task boundaries, consent artifacts, and whether hands or tools must stay visible.
Can suppliers validate this from samples?
Yes, if the buyer defines visible evidence, metadata requirements, and acceptance criteria before suppliers submit files.
Find datasets covering egocentric data
Truelabel surfaces vetted datasets and capture partners working with egocentric data. Send the modality, scale, and rights you need and we route you to the closest match.
Request egocentric data