Real-world POV Data: To Be Built with People, Not Just Algorithms

We are building a system to capture first-person human interaction data from real environments using wearable smart glasses.

Our smart-glasses platform is ready. The next step is deploying it with real contributors to begin collecting structured action data in everyday environments.

Outcome 1 (Goal): high-quality real-world data for embodied AI
Outcome 2 (Impact): new earning opportunities through data contribution

From Language Prediction → Action Prediction

Language models predict the next token from context. Embodied AI systems must predict the next physical action. Real-world first-person data is critical to bridge that gap.
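The parallel can be made concrete with a toy sketch: the same frequency-based "predict the most likely next item" machinery works over word tokens and over discrete action labels. This is an illustrative simplification, not our model architecture, and the action labels ("reach", "grasp", etc.) are hypothetical examples of what annotated POV footage might yield.

```python
from collections import Counter, defaultdict

def train_next_item_model(sequences):
    """Count item -> next-item transitions (a toy bigram model)."""
    transitions = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, context):
    """Return the most frequent follower of the last item in context."""
    followers = transitions.get(context[-1])
    return followers.most_common(1)[0][0] if followers else None

# Next-token prediction over words.
lm = train_next_item_model([["the", "cat", "sat", "on", "the", "mat"]])
print(predict_next(lm, ["the", "cat"]))  # -> sat

# The same machinery over hypothetical action labels from POV episodes.
episodes = [
    ["reach", "grasp", "lift", "place"],
    ["reach", "grasp", "lift", "place"],
    ["reach", "grasp", "pour"],
]
am = train_next_item_model(episodes)
print(predict_next(am, ["reach", "grasp"]))  # -> lift
```

The point of the analogy: language models learned from massive text corpora; action models need an equivalent corpus of real first-person interaction sequences.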

Language Model Prediction (reference)

A helpful analogy, but physical action prediction is the real bottleneck.


Action Prediction: e.g. Walking


Action Prediction: e.g. Cooking


Action Prediction: e.g. Industrial Repair


Real-world first-person data is required to train models that understand physical actions the same way language models understand words.

Human-centered data collection

Instead of lab environments, we plan to work with real contributors whose daily routines naturally include rich interaction with objects.

  • household workers
  • domestic helpers
  • service workers
  • workers in everyday home environments

These environments produce the authentic interaction patterns that robotics systems must eventually understand.

Real-world interactions at scale

Daily-life environments contain long-tail behaviors that simulations and lab data miss.

  • natural hand-object interaction
  • real clutter and environment variability
  • diverse routines and motion styles
  • practical, everyday tasks

This leads to data closer to real deployment conditions.

Community-based deployment approach

Data collection does not scale through uploads alone.

We support deployment through:

  • local field operators
  • contributor onboarding and education
  • quality control at the source
  • ongoing engagement

Result: a reliable, repeatable data pipeline.

Smart glasses journey

Prototype → MVP → Product & CES showcase → Deployment

Our Advantage: Hardware + Deployment Experience

We are not starting from scratch.

We have already developed wearable smart glasses enabling:

  • lightweight POV capture
  • scalable deployment
  • consistent data quality
  • real-world usability

This allows faster execution than software-only approaches.

Privacy-first by design

We plan to pass all captured footage through automated privacy filters before any processing or annotation.

  • face blurring
  • screen blurring
  • personal identifier removal
  • consent-based workflows

Privacy will be built into the infrastructure.
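The filter stages above can be sketched as an ordered pipeline in which consent is checked first and any frame that fails a stage is dropped before it reaches annotation. This is a minimal structural sketch, not our production system: the frame fields (`consent`, `gps`, `device_owner`) and the blur stages marking metadata flags are hypothetical stand-ins for real image processing.

```python
def require_consent(frame):
    """Drop any frame that lacks recorded contributor consent."""
    return frame if frame.get("consent") else None

def blur_faces(frame):
    # Stand-in for a real face-detection-and-blur step.
    return {**frame, "faces_blurred": True}

def blur_screens(frame):
    # Stand-in for detecting and blurring visible screens.
    return {**frame, "screens_blurred": True}

def strip_identifiers(frame):
    # Remove metadata fields that could identify a person or place
    # (field names are illustrative).
    return {k: v for k, v in frame.items() if k not in {"gps", "device_owner"}}

PIPELINE = [require_consent, blur_faces, blur_screens, strip_identifiers]

def apply_privacy_filters(frame):
    """Run every stage in order; a None result drops the frame entirely."""
    for stage in PIPELINE:
        frame = stage(frame)
        if frame is None:
            return None
    return frame

raw = {"consent": True, "gps": (1.2, 3.4), "pixels": "..."}
safe = apply_privacy_filters(raw)  # gps removed, blur flags set
no_consent = apply_privacy_filters({"consent": False})  # dropped -> None
```

The design point is ordering: consent gating and identifier removal happen at the infrastructure layer, so downstream annotation never sees unfiltered frames.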