The Future of Being Human, Quietly Being Defined?
“…it also takes a lot of energy to train a human… It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took the very widespread evolution of the 100 billion people that have ever lived.” — Sam Altman
1. The line that matters
Sam Altman’s provocation lands because it sounds like a simple correction. He says, in effect: “It takes like 20 years of life and all of the food you eat during that time before you get smart.” He adds that this “training” is not only personal but civilizational: modern competence sits on top of accumulated prior human knowledge, institutions, tools, and inherited infrastructure.
On the surface, this is an argument about energy and fairness. Underneath, it is an argument about what counts as a legitimate unit of comparison—and therefore what counts as a legitimate basis for governance.
2. The accounting correction
The clean version of the argument is straightforward. Critics often compare the one-time energy required to train a frontier model with the marginal energy of a trained human answering a single question. Altman’s reply is that this is structurally inconsistent. It compares capability formation on one side with capability use on the other.
If the debate is genuinely about energy and resources, the fairer comparator is trained-to-trained, or full-stack-to-full-stack: what it takes to produce and operate a competent cognitive system, amortised over the life of that system. As an accounting correction, this can be coherent. The question is what else it installs while doing so.
Whatever the intent, the frame has a predictable effect: it makes capability the dominant unit of comparison.
3. Two kinds of commensurability
There is a distinction that tends to go unnamed. Descriptive commensurability says we can measure energy, infrastructure, and resource inputs across systems. Normative commensurability says that because we can measure, we may now trade off systems against each other and optimise across them, treating them as comparable in value.
Altman’s comment begins as descriptive commensurability. But the metaphor, and the ease with which it travels, pushes toward normative commensurability. That shift is not a technicality. It is the philosophical hinge.
4. What the moment is structurally doing
The moment shifts the ground from moral intuition to commensurable accounting. It invites us to treat cognition as the output of a pipeline with inputs and outputs. On one side are food, care, education, and social stability. On the other are compute, chips, data centres, and industrial supply chains. The systems differ, but the structure becomes legible as comparable: energy and infrastructure are translated into usable cognitive service.
Once this is accepted, energy becomes more than a cost. It becomes a substrate for evaluating cognition, ranking choices, and justifying what should be built. The structural move is simple: capability becomes the unit of comparability.
5. The quiet reduction
To make “training a human” commensurable with “training a model,” the human must be re-described in a particular way: as a capability formation system whose worth becomes legible through outputs. This is a reduction. Not the claim that humans have capabilities—of course they do—but the claim that the human can be understood primarily through a capability–contribution framework: what capability can be produced, at what cost, and how it can be deployed as service.
The concern is not capability as such, but capability defined as service-output that can be priced, ranked, and substituted. The capability being privileged here is not human flourishing, but instruction-performing cognition that can be audited, benchmarked, and compared.
In that move, other frames of human value—intrinsic rights of life, dignity, relational being, moral agency—are displaced. They are not denied, but they are quietly repositioned as constraints around an optimisation problem rather than the ground of evaluation itself.
6. The invisible pathway: comparability opens redundancy
Here is the deeper consequence of making humans comparable in capability terms. Once the human is rendered as a capability carrier, and capability is framed as service output, the next step becomes almost automatic: substitution analysis. If a model can deliver the same cognitive service within accepted thresholds, then human capability becomes legible as redundant.
This is not only a labour-market story. It is a valuation story. Comparability does not merely enable “fair comparison.” It enables ranking capabilities by cost, throughput, and reliability. It creates a pathway where humans are redescribed as capability pipelines, capabilities are defined as substitutable services, services are priced and optimised, and redundancy is treated as rational efficiency whenever substitution crosses a threshold.
Once services can be procured and benchmarked, institutions naturally reorganise around procurement, compliance, and throughput. That is how the language of fairness becomes a language of replacement without ever declaring itself as such.
7. From labour reductionism to capability reductionism
Industrial modernity reduced humans to labour units—time, productivity, output. That reduction organised entire institutions: management, wages, schooling, labour law, and economic growth models. What is emerging now is more refined but still reductive. It shifts from labour to cognition, from bodily output to instruction-following performance. The human becomes a bundle of capabilities that can be compared, amortised, and substituted.
If labour reductionism flattened the human into a body that works, capability reductionism risks flattening the human into an agent that performs.
8. Compute as the reference frame
There is a further inversion hidden in the plausibility of the analogy. When “training” becomes the shared frame, compute quietly becomes the reference class for understanding intelligence. Human development becomes “training.” Education becomes “fine-tuning.” Culture becomes “corpus.” Civilisation becomes “priors.”
Reference frames migrate into governance. They influence what is measurable, fundable, optimisable, and dismissible. They steer education toward “capability production.” They steer welfare toward “capability activation.” They steer citizenship toward “human capital maintenance.” They steer childhood toward “future performance potential.” When education is reframed as producing future cognitive throughput, the citizen quietly becomes an investment object.
A civilisation can adopt these frames without announcing that it has done so, because they arrive as “reasonable metrics.”
9. The fork: two models of governance
At this point, two coherent positions separate. Capability-first governance says that in a constrained world, allocation requires comparison, comparison requires commensurability, and capability accounting is therefore necessary. Energy, infrastructure, and capability returns become legitimate substrates for decision.
Intrinsic-life governance says commensurability is precisely the danger. Persons are not reducible to capability. Intrinsic rights of life are not “constraints” around optimisation but the foundation of legitimacy, and some trade-offs must be prohibited even when optimisation looks compelling.
Neither position can be dismissed as naïve. Each produces a different civilisation.
10. A better stance: rights first, capability second
A more robust stance is not a binary rejection of metrics, but an ordering. Intrinsic rights, dignity, and the irreducibility of persons should be first-order: non-tradeable commitments that define the legitimate space of action. Capability metrics can be second-order: useful instruments inside those boundaries, where trade-offs are permitted.
This matters because it prevents the quiet inversion where rights become exceptions to efficiency, rather than efficiency being a bounded tool inside rights.
11. The question beneath the energy debate
The question is not whether it takes energy to raise and educate a human. Of course it does. The question is whether we accept a civilisation whose default grammar makes humans legible primarily through capability and contribution, and therefore makes redundancy a rational outcome whenever substitution crosses a threshold.
If we accept that grammar, the future of being human will increasingly be negotiated through optimisation: costed, amortised, compared, and replaced where “good enough” holds. If we refuse that grammar, we have to do the harder work: specify what is non-optimisable, define legitimacy boundaries, and design allocation under constraint without smuggling comparability back in through managerial proxies.
12. Coda: why the soundbite matters
Altman’s line matters not because it is elegant, but because it is effective. It takes a debate about energy and turns it into a debate about capability, comparability, and governance. It is a boundary correction that doubles as a philosophical offer.
The offer is a civilisation organised around capability accounting. The cost is the quiet reduction of being human, and the opening of a pathway in which human redundancy becomes an administratively rational conclusion.
The question is whether we notice that this is what is being offered, before we accept it as “just common sense.”

Hi Indy, you may enjoy reading this post as an extension of your thinking: https://www.theintrinsicperspective.com/p/why-we-stopped-making-einsteins
An important question! A weakness I see in the argument is that the hierarchy you propose is itself a claim. "Rights first, capabilities second" is something your text claims to be a good way of doing things, but I do not see compelling reasoning behind it.
I think another strong argument can be made: your text centres on capability, but only at the angle of personal capability. Altman, too, seems to look at things only on that level. What about collective capability? Humans, having bodies, body language, and emotions, can interact with each other in ways that go beyond the abilities of artificial systems, and this will remain so for a while. I think this unlocks, in terms of resilience, creativity, and collective survivability, a number of options that AIs conceived as pure bundles of individual capabilities would not be able to replicate. This angle shows another important limitation of Altman's argument. But of course I am still within the capability framework here, arguing from a somewhat evolutionary lens; my argument would be as valid for robots vs. ants. I do realise that you are making a different point. I am just not sure what the compelling arguments are to follow it.
And of course, this argument would apply the other way as well: AIs certainly have capabilities to interact in ways which are not open to humans. For example, transferring learning between them, let us say by transferring modified sets of weights, is something we cannot do.
I very much agree with the arguments you make in many of your other writings that the self is relationally constituted, entangled, and cannot be separated out. I see my argument here as a kind of capability lens on that, maybe. The argumentative gap I see is: how does this lead to a rights-first stance? I think an argument about the irreducibility of persons is valid, but should not an argument about the irreducibility of collectives come with it? Would something like that yield a collective capability argument that validly counters Altman's claim in his own framing?
I am treading carefully here because I certainly do not claim to have a full understanding of your very extensive thinking.
As background, I am also thinking about the risks and dangers of a "rights first" approach: which rights exactly are intrinsic? The rights of whom? How do we define "dignity"?