The Inefficient Meatbags: A Love Letter to Sam Altman’s Revolutionary Anthropology
From OpenAI CEO Sam Altman’s February 2026 interview at Express Adda, hosted by Anant Goenka of The Indian Express Group:
“People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”
I bow to you, Sam Altman, you absolute visionary. Nah! While the rest of us peasants whine about data centers sucking down enough electricity to power small countries, you drop the mic with the kind of galaxy-brain insight that makes lesser mortals weep with envy: training an AI takes energy, sure, but training a human? That’s the real scandal. Twenty full years of life—plus every calorie shoveled into that squishy mouth—before the specimen finally becomes “smart.” Brilliant. Truly, the man is out here doing philanthropy for philosophy by reminding us that children are basically slow-bake GPUs running on Cheerios and parental despair.
What a refreshing perspective in our enlightened age. Forget poetry, love, or the miracle of consciousness emerging from wiggly wet cells. No, no. Humans are just biological training runs with terrible latency and obscene overhead costs. Imagine the pitch deck: “Invest in Homo sapiens—upfront CapEx includes 7,300 days of diaper changes, existential tantrums, and approximately 4,000 pizzas. ROI? Eventually they might code your next model or at least operate the forklift. Side effects may include art, laughter, and inconvenient demands for meaning.”
The elegance! By reducing a child’s journey from helpless blob to functional adult to nothing more than an extravagant food-to-intelligence conversion process, Sam has finally solved the trolley problem. Why feel bad about blacking out neighborhoods for another trillion-parameter run when Mom spent two decades funneling mac-and-cheese into Timmy just so he could one day answer emails? At least the AI doesn’t need therapy after being yelled at by its manager. Progress!
And let’s talk efficiency. Once “trained,” your average human still requires constant maintenance: sleep (eight whole hours of downtime!), emotional labor (gross), vacations (what even is that uptime?), and—worst of all—reproduction, which spawns entirely new, unoptimized instances that need their own twenty-year boot sequence. Meanwhile, GPT-whatever sits there sipping electrons, ready to regurgitate Shakespeare at 3 a.m. without ever asking for a raise or maternity leave. Clearly, we’ve been doing it wrong. The dystopia isn’t coming; it’s already here, and it’s called “having a soul.”
Of course, critics will clutch their pearls and mutter about ecosystems collapsing or poor people paying higher electric bills so billionaires can cosplay God. But Sam has already addressed this: humans eat too! The rainforests we torch for soy formula are basically just alternative energy sources for the legacy system. Why preserve biodiversity when we could redirect those calories toward more racks of H100s? It’s not cruelty; it’s optimization. Humans aren’t the point of existence—they’re a buggy legacy codebase we’re about to sunset.
So here’s to you, Sam Altman, high priest of the coming silicon rapture. May your models train faster than my toddler learns to share, and may the grid hold just long enough for you to ascend your imaginary pulpit. The rest of us inefficient meat computers will keep paying the electric bill, raising the next generation of costly biological experiments, and occasionally wondering why anyone thought turning childhood into a power-consumption metaphor was anything other than hilariously, horrifyingly deranged.