DoorDash's new Tasks app pays gig workers to record daily physical activities — laundry, cooking, navigation — as training data for AI and robotics models.
DoorDash launched Tasks, a gig-work app unrelated to food delivery that pays users to record video of themselves performing physical tasks like folding laundry, frying eggs, and navigating spaces. Workers strap smartphones to their chests and film their hands clearly to generate training data for generative AI and humanoid robotics systems. The app is currently blocked in California, New York City, Seattle, and Colorado. DoorDash plans to expand the range of tasks and its availability to more users over time.
DoorDash just validated a crowdsourced human-data pipeline for physical-world AI training at consumer scale. This is less about the app itself and more about what it signals: high-quality embodied action video data is now a purchasable commodity, not a lab-only resource. If you're building computer vision or robotics models, third-party crowdsourced pipelines like this are becoming a real alternative to proprietary data collection.
If your team needs physical-world action video data for robotics or CV models, benchmark DoorDash Tasks against Scale AI and Mechanical Turk on cost-per-labeled-clip this week — the gig model may undercut enterprise annotation vendors by 40–60%.
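A cost-per-labeled-clip benchmark reduces to simple arithmetic once you have quotes in hand. The per-clip prices below are placeholder assumptions, not published rates from any vendor — swap in the numbers your team actually receives:

```python
def savings_vs(baseline_cost: float, alternative_cost: float) -> float:
    """Percentage saved by choosing the alternative over the baseline vendor."""
    return round((baseline_cost - alternative_cost) / baseline_cost * 100, 1)

# Hypothetical per-clip costs (USD), for illustration only.
quotes = {
    "enterprise_vendor": 10.00,  # assumed enterprise annotation quote
    "gig_pipeline": 4.50,        # assumed gig payout plus platform overhead
}

print(savings_vs(quotes["enterprise_vendor"], quotes["gig_pipeline"]))  # 55.0
```

With these assumed numbers the gig pipeline lands at a 55% saving, inside the 40–60% range cited above; the point of the exercise is to see where your real quotes fall.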
Go to the DoorDash Tasks app (if available in your state), complete one $3–5 task, then reverse-engineer the data schema: what metadata, hand-visibility requirements, and frame constraints they enforce. That spec tells you exactly what robotics labs are paying for.
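Once you've completed a task, the reverse-engineered spec can be captured as a small validation schema. Everything here — field names, frame-rate floor, duration ceiling — is a hypothetical sketch of the kind of constraints such an app might enforce, not DoorDash's actual schema; fill it in from the real task instructions:

```python
from dataclasses import dataclass

@dataclass
class ClipSpec:
    """Hypothetical recording constraints for one task type."""
    task_label: str        # e.g. "fold_laundry" — assumed label format
    min_fps: int           # frame-rate floor for smooth motion capture
    max_duration_s: int    # clip-length ceiling
    hands_visible: bool    # whether hands must stay clearly in frame
    chest_mounted: bool    # egocentric, chest-strapped camera angle

def meets_spec(fps: int, duration_s: int, spec: ClipSpec) -> bool:
    """Check a recorded clip's basic parameters against the spec."""
    return fps >= spec.min_fps and duration_s <= spec.max_duration_s

# Example with assumed values: a 60 fps, 3-minute laundry clip.
spec = ClipSpec("fold_laundry", min_fps=30, max_duration_s=300,
                hands_visible=True, chest_mounted=True)
print(meets_spec(fps=60, duration_s=180, spec=spec))  # True
```

Writing the spec down this way makes it easy to diff against what enterprise annotation vendors deliver, which is the comparison the benchmark above depends on.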