This Home Robot Learned to Do Your Dishes from 10 Million Real Family Routines

Sunday Robotics launched Memo, a household robot trained on 10 million real family routines captured with $200 Skill Capture Gloves—it clears tables, loads dishwashers, and folds laundry without a single teleoperation trajectory.

Here's the dream: You finish dinner, walk away from the table, and an hour later the dishes are clean, the dishwasher's running, and you didn't lift a finger. Not because you have a spouse or roommate doing cleanup—because you have Memo.

On Tuesday November 19th, Sunday Robotics emerged from stealth with $35M from Benchmark and Conviction to launch Memo, a two-armed rolling household robot trained on over 10 million episodes of real family routines. Unlike competitors betting on humanoid forms, Memo rolls on a stable wheeled base and tackles the unglamorous stuff families actually need: clearing tables, loading dishwashers, folding laundry, organizing shoes, and pulling espresso shots (read more here).

Here's what makes Sunday different: Instead of using slow, expensive teleoperation to train robots (where humans remotely control robot arms), Sunday built Skill Capture Gloves—wearables that record how real people move, clean, and organize in their actual homes. The gloves have the same geometry and sensors as Memo's hands, so if you can do it wearing the glove, Memo can learn it.
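To make that concrete, here's a rough sketch of what one frame of glove-captured demonstration data might contain. The field names and shapes are illustrative guesses for this article, not Sunday's actual schema:

```python
# Hypothetical sketch of a glove-captured demonstration frame.
# Fields and shapes are invented for illustration, not Sunday's real format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GloveFrame:
    timestamp_s: float                # capture time in seconds
    wrist_pose: List[float]           # 3D position + quaternion (7 values)
    finger_joint_angles: List[float]  # one angle per articulated finger joint
    fingertip_pressures: List[float]  # tactile readings, if the glove has them
    wrist_camera_jpeg: bytes = b""    # optional egocentric image for visual context

@dataclass
class Episode:
    task_label: str                   # e.g. "load_dishwasher"
    frames: List[GloveFrame] = field(default_factory=list)

# One household chore becomes a time-ordered stream of frames:
demo = Episode(task_label="load_dishwasher")
demo.frames.append(GloveFrame(0.0, [0.3, 0.1, 0.9, 1, 0, 0, 0], [0.0] * 16, [0.0] * 5))
```

Because the glove's geometry matches Memo's hand, every frame is already expressed in terms of a hand the robot physically has, which is what makes the cheap hardware usable as training data.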

The economics are staggering: The Skill Capture Glove costs $200 versus $20,000 for teleoperation equipment—two orders of magnitude cheaper. "It also allows us to scale diversity faster," founder Tony Zhao explained. "You can collect data anywhere without needing to move robots around."

"If the only thing we rely on is teleoperation to get training data, it will take decades," Zhao noted in a detailed thread. For context: even Tesla, with millions of vehicles collecting driving data every day, took a decade to accumulate enough for real progress. Sunday's bet is that 8 billion humans on Earth can bootstrap robot intelligence faster—if you can capture their movements efficiently. Sunday has shipped over 2,000 gloves to "Memory Developers" across 500+ homes, collecting the messy, chaotic, real-world data that makes Memo work in any kitchen—not just controlled lab environments. The team is "constantly surprised by what we find—from cats in dishwashers to bucketloads of plums on the table."

But there's a problem: humans wearing gloves vary in height, arm length, and body proportions. How do you convert human movements into robot movements? Sunday developed Skill Transform, a software pipeline that converts glove data into robot-compatible actions with a 90%+ success rate.
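As a minimal sketch of the retargeting idea, assuming (as Sunday describes) that the glove and Memo's hand share the same joint layout: finger joints can map across nearly one-to-one, while the wrist pose gets re-expressed in the robot's base frame through a calibration transform. The function and names below are hypothetical; Skill Transform's actual internals aren't public:

```python
# Illustrative retargeting sketch, not Sunday's Skill Transform implementation.
import numpy as np

def retarget_frame(wrist_pose_human: np.ndarray,
                   finger_angles: np.ndarray,
                   T_robot_from_human: np.ndarray) -> dict:
    """Map one glove frame into a robot-executable target.

    wrist_pose_human:   4x4 homogeneous transform of the glove wrist in the
                        demonstrator's reference frame.
    finger_angles:      finger joint angles; identical geometry means a 1:1 copy.
    T_robot_from_human: calibration transform from the demonstrator's frame to
                        the robot base frame (e.g. estimated once per episode).
    """
    wrist_pose_robot = T_robot_from_human @ wrist_pose_human
    return {
        "wrist_target": wrist_pose_robot,        # handed to the arm's IK / controller
        "finger_targets": finger_angles.copy(),  # passed through unchanged
    }

# Example: identity calibration, wrist 30 cm forward and 90 cm up from the base.
wrist = np.eye(4)
wrist[:3, 3] = [0.3, 0.0, 0.9]
action = retarget_frame(wrist, np.zeros(16), np.eye(4))
```

The hard part this sketch glosses over (differences in reach, unreachable poses, noisy calibration) is presumably where the real pipeline earns its 90%+ success rate.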

What Memo can do:

  • Dishes: The table-to-dishwasher task is what Zhao calls "the classic nightmare scenario for roboticists"—long-horizon, highly dexterous, precise, whole-body manipulation combined with delicate, transparent, reflective, and deformable objects. Yet Memo handles it naturally and elegantly.
  • Wine glass precision: Push down with too much force? Shatter. Slot it onto the wrong dishwasher prong? Shatter. Sunday broke many glasses during development but achieved zero breakages over 20+ live demo sessions.
  • Laundry: Folds piles of socks and handles clothing.
  • Coffee: Pulls espresso shots with proper crema.
  • Zero-shot generalization: Works in homes it's never seen before (deployed to 6 unseen Airbnbs successfully).

Regarding the dishes, the numbers tell the story: ACT-1 autonomously performs 33 unique and 68 total dexterous interactions with 21 different objects while navigating more than 130 feet. In plain terms, one dinner cleanup means dozens of grab-move-place sequences—picking up a plate, scraping food into the trash, placing it in the dishwasher rack, grabbing a wine glass, and so on—across your entire kitchen, all without a human touching the controls.

And regarding zero-shot generalization, ACT-1 is the first foundation model that combines long-horizon manipulation with map-conditioned navigation in a single end-to-end model—give it a 3D map of a new home, and it figures out where to go. Most robots can either grab things OR navigate a room, but not both at once; Memo does both in one brain, no re-training required.
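In code, the "one brain" idea amounts to a single network conditioned on a map embedding alongside camera and body state, with separate output heads for driving the base and moving the arms. The PyTorch sketch below is an invented illustration of that interface, not ACT-1's published architecture:

```python
# Illustrative map-conditioned policy interface; dimensions and layers are invented.
import torch
import torch.nn as nn

class MapConditionedPolicy(nn.Module):
    def __init__(self, map_dim=256, obs_dim=512, proprio_dim=32, action_dim=20):
        super().__init__()
        self.map_encoder = nn.Linear(map_dim, 128)       # embeds a precomputed 3D-map feature
        self.obs_encoder = nn.Linear(obs_dim, 256)       # stands in for a vision backbone
        self.proprio_encoder = nn.Linear(proprio_dim, 64)
        self.trunk = nn.Sequential(
            nn.Linear(128 + 256 + 64, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        self.base_head = nn.Linear(512, 3)               # base command: vx, vy, yaw rate
        self.manip_head = nn.Linear(512, action_dim)     # arm joints + gripper commands

    def forward(self, map_feat, obs_feat, proprio):
        z = torch.cat([
            self.map_encoder(map_feat),
            self.obs_encoder(obs_feat),
            self.proprio_encoder(proprio),
        ], dim=-1)
        h = self.trunk(z)
        return self.base_head(h), self.manip_head(h)     # navigation and manipulation, one set of weights

policy = MapConditionedPolicy()
base_cmd, arm_cmd = policy(torch.randn(1, 256), torch.randn(1, 512), torch.randn(1, 32))
```

The structural point is that navigation and manipulation share weights, so there's no brittle hand-off between a "grasping model" and a "navigation model" when Memo rolls into an unfamiliar kitchen.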

The design is deliberately non-threatening: Memo wears colorful baseball caps with a 360-degree camera underneath, sports a soft silicone shell, and moves at about 50% of human speed. One observer noted Memo's dexterity: "It had two wine glasses in one hand and just nudged the dishwasher door open with the back of its wrist."

The timeline: It took Sunday over a year to engineer the core infrastructure. Then they spent just three months producing all the autonomous results in their launch videos. That rapid progress from infrastructure to capability is exactly what investors bet on.

The simulation vs. real-world debate: Interestingly, Zhao reposted CMU's VIRAL system this week—a competing approach that trains humanoid robots entirely in simulation using zero teleoperation and zero real-world data.

VIRAL (Visual Sim-to-Real at Scale) achieved 54 autonomous loco-manipulation cycles (walk, stand, place, pick, turn) using a simple recipe: reinforcement learning, simulation, and GPUs. The pipeline trains a privileged "teacher" with full state access in sim, then distills a vision-based "student" policy using large-scale simulation with tiled rendering across tens of GPUs. According to the research paper, scaling visual sim-to-real compute to 64 GPUs and accelerating physics 10,000x real-time in NVIDIA Isaac Lab was "necessary for convergence and robustness" on long-horizon tasks.
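Schematically, the teacher-student half of that recipe is supervised distillation: a privileged policy trained with full simulator state labels the actions, and a vision-only student learns to reproduce them from rendered observations. The skeleton below is illustrative only; it leaves out the RL training of the teacher, the tiled rendering, and the multi-GPU scaling the paper says is essential:

```python
# Schematic teacher -> student distillation loop; data here is random placeholder
# tensors, standing in for simulator state and rendered-image features.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 20))  # privileged state -> action
student = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 20))  # image features -> action
opt = torch.optim.Adam(student.parameters(), lr=3e-4)

for step in range(1000):
    # In the real pipeline these batches come from thousands of parallel simulated scenes.
    priv_state = torch.randn(64, 128)   # full simulator state (object poses, contacts, ...)
    image_feat = torch.randn(64, 512)   # what the student actually sees: rendered pixels, encoded

    with torch.no_grad():
        target_action = teacher(priv_state)  # the teacher acts as the "expert" labeler

    loss = nn.functional.mse_loss(student(image_feat), target_action)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The appeal is that none of this touches a physical robot; the cost shows up instead as GPU hours and the engineering needed to make the rendered world look and behave enough like the real one.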

The contrast with Sunday is stark. While VIRAL bets on pure simulation with photorealistic rendering and zero human data, Sunday is betting on real-world data supremacy. "Data is the bottleneck of robot learning," the VIRAL team noted—but their solution is to generate it synthetically, while Sunday's is to capture it from real families doing real chores.

The team behind it reads like a who's who of AI talent. Co-founders Tony Zhao (CEO) and Cheng Chi (CTO) are Stanford PhD roboticists known for breakthrough work on ALOHA, Diffusion Policy, and UMI. They've assembled a murderers' row of ex-Tesla Autopilot engineers—including Nishant Desai (formerly FSD autonomy), Nadeesha Amarasinghe (who led ML systems for FSD and Optimus), and Perry Jia.

Undergrad researcher Alper Canberk also went viral this week for single-handedly training all of Sunday's models. As one engineer put it, "this guy is an undergrad and is the only person training Sunday's models. wow."

Cheng Chi said Alper is "genuinely the most cracked full-stack roboticist I've ever met"—co-designing everything from ML infrastructure to CNC machining to PCB design. "As a robotics researcher, I've always had the urge to learn the full stack: Control, SLAM, ML Infra, Web, Mobile, Cloud, CNC, PCB, Firmware, DataOps, Supply Chain," Chi explained. "Here, I sit across the desk from deep experts in each of these fields. The bandwidth for learning is insane."

Here's an interview with the team about their early days working out of a Mountain View house, with 16 3D printers running 24/7 in the garage, and their strategy for winning the humanoid robotics race.

Why this matters: Home robotics has been stuck in a data deadlock. Most robots are adapted from industrial machines and trained in sterile labs, then fail spectacularly when they encounter the chaos of real homes. Sunday's insight? Train on the chaos from the start. Their ACT-1 foundation model was trained on zero conventional robot data—only human demonstrations via the Skill Capture Glove.

The industry is splitting into two camps: teams betting on simulation to generate unlimited synthetic data, and teams betting on real-world data collection at scale. Sunday has chosen the latter, and raised $35M to prove that the messy reality of 500 real homes beats the pristine physics of any simulator.

Expect rapid capability expansion. In December 2024, Memo had one arm and could only organize shoes. By October 2025, it was folding socks, handling glassware, and making espresso. Sunday projects they'll scale manufacturing to reduce costs by at least 50% and begin shipping to 50 founding families in late 2026, with broader consumer deployment in 2027-2028.

If you're tired of doing dishes and want to be an early tester, applications for Sunday's Founding Family Beta opened this week at sunday.ai. "Sunday is for you" isn't just clever branding—it's a bet that the first truly useful home robots won't look like C-3PO. They'll look like Memo: friendly, rolling, and trained on exactly what your family needs done.

See you cool cats on X!
