The Value of Being Human
1. The Fork Beneath the AI Debate
Beneath the current debates about artificial intelligence, productivity, and labour markets lies a deeper philosophical question. The issue is not simply whether machines can outperform humans at defined tasks. It is whether we understand the human being as a fixed bundle of capabilities, or as a developmental system whose future capacities are not yet fully knowable.
This is an ontological choice. It determines how we define value, how we allocate capital, how we design education, and how we structure our institutions.
2. The Closed View: Human as Defined Capability
In the dominant frame today, the human is treated as a collection of capabilities that can be specified, measured, and compared. Reasoning, creativity, emotional intelligence, coordination — these are functions that can be listed, benchmarked, and evaluated.
Once defined in this way, humans enter a comparative logic. If machines match or exceed human performance at certain tasks, substitution appears rational. If machines are cheaper or more efficient, replacement becomes an optimisation decision. Within this frame, redundancy is not a moral failure; it is simply a technical outcome.
This position is internally coherent. However, it rests on a significant assumption: that the human is a closed set of functions whose value can be definitively stated in advance. Once that assumption is accepted, collapsing human worth into a capability contribution model feels objective. But this objectivity depends on treating the human as already fully defined.
3. The Developmental View: Human as Open System
An alternative view understands the human not as a finished capability set but as an evolving trajectory. Human capacities have never been static. Writing reshaped memory. Printing reshaped cognition. Industrial systems reshaped coordination. Digital networks reshaped perception and social life.
Under major technological transitions, humans do not simply compete; they reorganise. The space of possible competencies shifts.
An open ontology of the human makes a restrained claim: the full range of future human capacities cannot be completely specified in advance, particularly under transformative technologies like AI. If this is true, then any valuation system that assumes completeness risks being epistemically overconfident.
4. Measurement Versus Governance by Measurement
It is important to distinguish measurement from governance by measurement. Measuring performance is not inherently problematic. The difficulty arises when measurable capability becomes the primary basis for allocation, status, and institutional design.
When metrics become targets, systems reorganise around them. What can be measured becomes what is rewarded. Over time, this exerts selection pressure. Education systems, workplaces, and cultural incentives gradually align human development with what is legible and comparable.
In this way, the act of making humans comparable begins to shape them into what is comparable. Optimisation replaces development, and comparability replaces irreducibility.
5. The Risk of Developmental Compression
When human value is reduced to present measurable capability, developmental pathways that do not align with existing metrics receive less investment and recognition. Over time, this can narrow the range of human development.
Development is path-dependent. Once institutions optimise around specific benchmarks, alternative forms of cognition, creativity, or cooperation may diminish. Some developmental trajectories may become difficult or impossible to recover.
The concern is not simply that humans might be undervalued today. The deeper concern is that we may foreclose forms of human becoming that have not yet emerged.
6. The Value of the Unknown
In financial contexts, optionality carries value because the future is uncertain. We recognise that unknown possibilities justify caution against premature closure. Yet when evaluating humanity itself, we often rely on static productivity measures.
If humans are developmental systems, then their unknown potential has structural value. This value cannot be fully captured within present capability frameworks because those frameworks are defined by current tasks and benchmarks. Treating humans as static inventories rather than evolving systems risks mispricing their long-term contribution.
The argument is not sentimental. It is prudential. Under conditions of uncertainty, it is unwise to collapse a developmental substrate into a static valuation model.
7. AI as Developmental Selection Pressure
AI does not predetermine human decline or elevation. It introduces selection pressure. It may expand human capacities by reshaping cognition and collaboration. It may also compress development by incentivising narrow optimisation toward machine-comparable tasks.
Which outcome dominates depends less on the technology itself and more on the institutional frameworks within which it is embedded. AI can amplify development or narrow it. The difference lies in how we encode value.
8. Closed Ontology and Open Ontology
The distinction can be summarised clearly.
In a closed ontology, humans are definable bundles of functions. Capacities are enumerable. Value is measurable. Substitution is rational. Optimisation is primary.
In an open ontology, humans are developmental systems. Capacities are emergent and partially unknowable. Value cannot be fully specified in advance. Substitution must be approached with humility. Preservation of developmental possibility becomes central.
This philosophical difference directly shapes institutional design. A closed ontology builds systems oriented toward substitution. An open ontology builds systems oriented toward developmental preservation.
9. The Core Question
The central issue is not whether humans outperform machines at specific tasks. It is whether we treat the human future as still emergent — not yet fully nameable, measurable, or replaceable.
If we assume a closed ontology, definitive valuation and substitution follow logically. If we assume an open ontology, we must design institutions that preserve human developmental possibility and resist premature optimisation.
The choice between these positions will quietly determine whether our systems compress human potential into current benchmarks or preserve the conditions under which new forms of human becoming can emerge.

This made me think of something I once heard from Rory Sutherland on a podcast, later reported by Sahil Bloom in his newsletter.
The example: a hotel fires its doorman and installs an automatic door.
“Two years later, the hotel's a catastrophe...because the doorman was doing multiple things, many of which were human and kind of tacit. Security would be one...Hailing taxis, dealing with luggage, recognizing regular guests, providing status to the hotel—there are loads and loads of value creation components to that doorman which aren’t captured in the open-the-door definition.”
Sutherland coined the term "Doorman Fallacy" to describe this phenomenon.
In the words of Bloom: “It arises when you ground your understanding of value in only the most visible function or skills, while failing to appreciate the full scope of tangible and intangible value that exists just under the surface.”
If we assume humans are still becoming (and I do), our work is to cultivate systems that allow developmental possibility to unfold.