When everything becomes less hard
Organizations after ubiquitous superintelligence
Monday arrives with its usual small cruelties: the soft ping of another calendar invite, the righteous urgency of an email thread nobody asked for, the meeting that could have been a paragraph (but somehow instead became a Lovecraftian recurring ritual with an agenda template). I sometimes imagine an archaeologist of the far future excavating Outlook and concluding, quite reasonably, that we worshipped rectangles of time.
I have been thinking about the delightful essay “Why is everything so hard in a large organisation?”, which I re-read every time I need to be concisely reminded that my circumstances are not merely personal weakness, nor a local pathology of my employers, nor evidence that everyone except me is incompetent.
It’s structural.
Large organisations are hard because humans doing multi-participant work are hard: incentives misalign, accountability diffuses, coordination is expensive, and change relentlessly erodes shared context. The essay’s diagnosis is bracingly simple, almost rude: a great deal of organisational pain is a repeated, ambient Prisoner’s Dilemma, played at scale, under time pressure, with imperfect information and minimal “skin in the game”.
All of which is fine, in the bleak way that accurate things often are. But suppose I take one step sideways into my favourite speculative world: one that now feels less like science fiction and more like a pending software update. Suppose, as we often do here, that superintelligent, agentic AI becomes ubiquitous and cheap. Suppose further that any intellectual labour (strategy, design, code, legal analysis, negotiation, documentation, forecasting, persuasion, the whole lot) is performed better by machines than by humans.
In that world, what does an “organisation” even look like?
The naive answer is that organisations evaporate: if machines can do everything, why bother with the messy, expensive, emotionally complicated primates? A tidier answer is that organisations become pure automation: a few servers, a few models, and a procurement account. Both answers are, I think, wrong in the interesting way. The awkward truth is that organisations are not merely engines for thinking. They are also engines for wanting, for authorising, and for being held responsible, and those functions do not vanish just because intelligence becomes abundant.
Thinking becomes free. Responsibility doesn’t.
The firm as a cognition machine made of people
It helps to start with an unromantic premise: much of what we call “management” is a coordination technology built out of human limitations. Meetings exist because we have scarce attention and leaky memory. Middle layers exist because information degrades as it travels. Process exists because trust is expensive. The Allen curve (the empirical fact that communication drops off sharply with distance) and Brooks’s law (adding people to a late project makes it even later) are not quirky footnotes; they are the operating constraints of the human substrate.
The “why is everything so hard” post is, among other things, an essay about those constraints. It describes how large organisations “solve” complex tasks by decomposing them into workflows executed by many average people because the “hero” strategy breaks on the hard limits of human bandwidth and skill distribution. It also points out that coordination is both necessary and painful: it requires time, literacy, emotional stamina, and a willingness to do work that does not look like “doing the work”. (A great organisational tragedy is that the work needed to make work possible is treated as suspiciously adjacent to idleness.)
Now flip the substrate. If agentic superintelligence is ubiquitous, coordination stops being expensive in the same way. Memory is no longer leaky. Context does not decay unless we choose to let it. Written communication is not a bottleneck because writing and reading are effectively instantaneous (and can be tailored to any level of detail, from haiku to formal proof, depending on what the recipient needs). The “hero” strategy returns, but the hero is no longer a rare human unicorn; it is a swarm of robot unicorns with perfect recall and relentless patience.
So do we finally get the fabled Monday that feels like Sunday?
Not quite. Something subtler happens: the locus of difficulty moves.
Coase in the age of cheap cognition
There is an old idea in economics that has aged remarkably well: Ronald Coase’s argument that firms exist because markets have transaction costs. If it is expensive to find suppliers, negotiate contracts, verify performance, and enforce agreements, you internalise those transactions inside a hierarchy. The boundary of the firm is, in this view, a boundary drawn around high transaction costs.
Superintelligent agents annihilate transaction costs, or at least they compress them by orders of magnitude. Searching for counterparties becomes trivial. Negotiation becomes fast, precise, and continuously revisable. Verification can be automated. Enforcement can be embedded in systems of monitoring and contractual “hooks” that mechanically trigger consequences. Translation across domains (legal, technical, financial) becomes, for the first time, genuinely fluent.
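To make the Coasean logic concrete, here is a deliberately toy make-or-buy model (all names and numbers are invented for illustration): an activity stays inside the hierarchy only while the market's transaction overhead exceeds the internal coordination cost, so compressing that overhead shrinks the firm's boundary.

```python
# Toy Coasean make-or-buy model. Names and numbers are invented;
# this only illustrates the direction of the effect, not its magnitude.

def boundary(activities, internal_cost, transaction_overhead):
    """Return the activities that end up inside the firm.

    transaction_overhead multiplies the market price to account for
    search, negotiation, verification, and enforcement costs.
    """
    inside = []
    for name, market_price in activities:
        market_total = market_price * transaction_overhead
        if internal_cost[name] < market_total:
            inside.append(name)
    return inside

activities = [("legal", 100), ("design", 80), ("logistics", 60)]
internal = {"legal": 150, "design": 90, "logistics": 70}

# Human-era frictions: transactions carry heavy overhead.
print(boundary(activities, internal, 1.8))   # everything is internalised
# AI-era frictions: overhead compressed to a few percent.
print(boundary(activities, internal, 1.05))  # the firm's boundary empties out
```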
In a world like that, the economic reason for giant, stable hierarchies weakens. The firm’s boundary should shrink and become more fluid, because you can safely outsource and recombine activities without drowning in overhead. Many functions that used to require large internal departments start to look like composable services, purchased on demand and continuously audited.
This is the part where some people cheerfully declare the end of the firm. I am less sure. The boundary of the firm may shrink, but the boundary of ownership often does not. Physical assets, regulatory licences, distribution channels, brands, compute infrastructure, privileged data rights: these remain sources of advantage even when “smartness” is commoditised. So we may end up with fewer large organisations in the everyday, org-chart sense (the endless ladders of management, the sprawling departments), while still having large entities that hold and control assets.
The megacorp may well survive, but in the form of a thin shell.
The Prisoner’s Dilemma doesn’t disappear; it becomes loggable
The “why is everything so hard” post centres the Prisoner’s Dilemma for good reason. In large organisations, outcomes are produced collectively, while incentives and consequences are often individual (or at least local). The result is predictable: each participant has reasons to optimise their own position, even when doing so produces a globally worse outcome, and nobody is fully accountable for the aggregate mess¹.
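For readers who prefer the game laid bare, a minimal one-shot payoff table (standard textbook values, not taken from the original essay) shows why local optimisation dominates even as it worsens the collective outcome:

```python
# One-shot Prisoner's Dilemma with standard textbook payoffs.
# "Defect" stands in for optimising your own local metrics;
# "cooperate" for doing the unglamorous shared coordination work.

PAYOFF = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_reply(their_move):
    """Each player's individually rational move, whatever the other does."""
    return max(["cooperate", "defect"],
               key=lambda m: PAYOFF[(m, their_move)][0])

# Defection dominates for each individual...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"
# ...yet mutual defection is collectively worse than mutual cooperation.
assert sum(PAYOFF[("defect", "defect")]) < sum(PAYOFF[("cooperate", "cooperate")])
```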
Superintelligent systems change the mechanics of this game in a way that is both liberating and slightly terrifying: they make traceability cheap. Every decision can be logged with its supporting evidence, its predicted consequences, and its alternatives. Every handoff can be annotated. Every delay can be causally analysed. The postmortem can be less narrative and more measurement.
In other words, “accountability” can become mechanical rather than social.
That sounds wonderful, until one remembers that mechanical accountability tends to mutate into metric accountability, and metric accountability has a nasty habit of turning into Goodhart’s law at scale: when a measure becomes a target, it ceases to be a good measure. If the system rewards on-time delivery, you will get on-time delivery (modulo definition games), perhaps at the cost of quality. If it rewards customer satisfaction, you will get satisfaction, perhaps at the cost of truth. If it rewards zero safety incidents, you may get incident suppression, not safety. And so on.
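Goodhart’s dynamic can be caricatured in a few lines. The proxy-versus-goal relationship below is entirely invented, chosen only so the effect is visible: once all effort flows to the measured proxy, the proxy improves while the thing we actually cared about degrades.

```python
# Toy Goodhart's-law simulation; all numbers invented for illustration.
# An agent is scored on a proxy ("on-time delivery") that only partly
# tracks the true goal ("quality").

def outcome(effort_on_proxy):
    """effort_on_proxy in [0, 1]; the remaining effort goes to quality."""
    proxy = effort_on_proxy  # what the metric sees
    # The proxy contributes a little to the true goal; unmeasured work
    # contributes more.
    true_goal = 0.5 * effort_on_proxy + (1 - effort_on_proxy)
    return proxy, true_goal

naive = outcome(0.2)   # the measure used as a measure
gamed = outcome(1.0)   # the measure used as a target
assert gamed[0] > naive[0]  # the metric improves...
assert gamed[1] < naive[1]  # ...while the goal we cared about degrades
```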
Humans are extraordinarily creative when incentivised to game constraints; superintelligent systems, if mis-specified, could be even more creative (and far faster). So the new organisational problem is not merely “can we measure?” but “can we specify what we actually mean?”, and “can we do so in a way that resists adversarial optimisation?”
Specification becomes destiny.
The Prisoner’s Dilemma is not eliminated; it is shifted into the design of objective functions, constraints, and audits. The battleground becomes: who sets the rules, who can change them, and who has veto power.
Meetings die; parliaments survive
One of the most visible changes, I suspect, is that the information-bearing meeting fades. When AI agents can maintain shared state, produce perfect summaries, and resolve dependencies continuously, the weekly sync becomes a quaint superstition. “Let’s get everyone in a room to share updates” starts to feel like gathering the village to watch someone crank a generator.
But the meeting does not vanish entirely, because many meetings are not about information. They are about legitimacy.
Humans meet to establish that a decision was collectively owned, that the process was “fair”, that dissent was heard, that risk was socially distributed, that status was respected, that fear was managed. These are not intellectual functions; they are political and emotional ones, and they are stubbornly persistent. We might replace the spreadsheet meeting with an automated report, but we will keep the “constitutional” meeting: the one where trade-offs are acknowledged, commitments are made, and responsibility is taken (or, if we are honest, strategically avoided).
So we get fewer meetings that exist because we cannot write, read, or remember efficiently, and more meetings that exist because we are primates who require ritual to coordinate trust. The organisation becomes, in part, a parliament.
I would love to tell you this makes Mondays feel like Sundays… but, alas, it mostly just changes the flavour of the fatigue.
The new bureaucracy is constraint sprawl
Here is a perverse effect of cognitive abundance: when execution becomes cheap, constraints proliferate.
In today’s organisations, bureaucracy is often a crude substitute for trust and competence: checklists, approvals, committees, compliance regimes that exist because someone somewhere did something regrettable and the institution responded by pouring concrete over the problem. In an AI-driven organisation, many of these processes can be automated, which sounds like liberation. Yet the temptation will be to add more rules precisely because adding rules is now “free” in human time.
Every stakeholder can demand a constraint. Every risk committee can demand a guardrail. Every regulator can demand a reporting regime. Every PR team can demand a sensitivity filter. None of this feels costly, because the AI can implement it.
The cost appears later, as system complexity: conflicts between policies, brittle interactions between constraints, unexpected behaviour when old rules meet new contexts. This is what I think of as specification debt: the accumulating, poorly integrated pile of goals and prohibitions that define what the organisation is “allowed” to do, even when those permissions no longer map cleanly onto reality².
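The flavour of specification debt can be sketched with an invented three-stakeholder rule set: each rule is locally harmless, yet jointly they leave no admissible action at all.

```python
# Toy illustration of "specification debt" (the essay's term); the
# stakeholders and rules are invented. Each constraint looks reasonable
# in isolation, but together they are unsatisfiable.

rules = [
    ("risk",  lambda a: a["speed"] <= 5),       # guardrail: don't move fast
    ("sales", lambda a: a["speed"] >= 8),       # commitment: ship quickly
    ("pr",    lambda a: not a["controversial"]),
]

def violated_by(action):
    """Return the owners of every rule the action breaks."""
    return [owner for owner, rule in rules if not rule(action)]

action = {"speed": 6, "controversial": False}
print(violated_by(action))  # ['risk', 'sales'] -- the constraints conflict
```

No choice of speed satisfies both the risk and sales rules, which is exactly the kind of latent conflict that only surfaces when the system is asked to act.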
In the human era, shared context decays because people forget. In the AI era, shared context can decay because policy ossifies.
A perfect memory is not the same thing as wisdom.
What is left for humans?
If machines outperform humans at every intellectual task, the remaining roles for humans are not “work” in the traditional sense. They are, instead, roles that exist because society demands a human-shaped locus of responsibility and meaning.
Someone still has to hold legal authority: to sign contracts, allocate capital, accept liability, certify compliance, and answer to regulators. Someone still has to adjudicate value conflicts that are not reducible to optimisation (at least not without smuggling the moral conclusions into the objective function and pretending we did not). Someone still has to be the face that communities can hold accountable, for better or worse.
This suggests a curious inversion. We do not keep humans in organisations because they are the best at thinking; we keep them because they are the only entities our legal and moral systems currently recognise as answerable. The human layer becomes thinner, but also heavier in consequence: fewer people, holding more responsibility, surrounded by systems that can do everything except be morally blamed.
There is also a second category of “human necessity” that is softer but no less real: institutions where the product is partly humans doing the thing. Universities, civic organisations, medicine (in some forms), art, religious communities: these survive not because AI cannot do the intellectual work, but because the work carries meaning when it is human. A concert played by robots may be flawless; it may also be, depending on one’s temperament, completely beside the point³.
The world will not run out of tasks. It will run out of tasks that confer human dignity by default. Organisations that understand this may matter more than those that simply automate.
New organisational shapes (or, the thin shell and the swarm)
If I try to picture the organisational landscape in this world, I see a handful of recurring forms.
One is the thin-shell megacorp: a large asset-holder with a minimal human governance layer (e.g., boards, signatories, public-facing representatives) resting atop a dense substrate of AI systems that plan, negotiate, audit, and execute. It looks less like today’s bureaucracy and more like an automated operating system for capital and capability.
Another is the liquid network: many small legal entities that spin up and recombine at machine speed, contracting with each other through AI-negotiated agreements and AI-verified deliverables. It resembles a market more than a firm, except that the “participants” are often synthetic agents acting on behalf of human owners or institutional mandates.
A third is the protocol organisation: an entity whose governance is explicit and partially machine-enforced: constitutions, permissions, audit trails, dispute-resolution mechanisms. (There is a temptation to reach for “DAO” here, and perhaps that’s directionally right, but I hesitate, because the historical DAOs were mostly toys; ubiquitous superintelligence makes the underlying idea operational in a way blockchain technology alone could not.)
And then there are the human-meaning institutions, which may look, superficially, anachronistic: places where we insist on human participation precisely because we could have automated it, and we choose not to.
These forms coexist, collide, and hybridise. None of them abolish politics. They merely change its substrate.
The uncomfortable conclusion
The original post ends with a kind of pragmatic stoicism: once you understand the structural sources of organisational friction (misaligned incentives, costly communication, change), you can navigate them with less surprise and more agency. I find that comforting, in the same way that learning the physics of turbulent airflow is comforting when you are stuck on a bumpy flight.
In the superintelligent world, many of those frictions dissolve internally. Coordination becomes cheap. Context becomes durable. The small tragedies of email and meeting culture recede. Yet the deeper conflicts do not vanish, because they were never merely about intellect. They were about values and power: who gets to decide, who bears the cost, who is protected, who is sacrificed, and who gets to tell the story afterwards.
When thinking becomes free, the scarce resource is not intelligence. It is legitimacy.
And perhaps that is the real reason Monday will never fully become Sunday. Sunday is private. Monday is collective. Collective life requires governance, and governance, whether in a corporation, a university, or a nation-state, is the art of making choices under disagreement, and then living with them. Superintelligent agents can make those choices faster, clearer, and more auditable. They cannot make them painless.
At least not without changing what we mean by “we”.
1. And this seems even more apparent in universities than in many other types of organisation.
2. One is tempted, almost beyond measure, to offer a quip on university governance here… but I shall hold my tongue, as that is a topic for another day.
3. One wonders what effect this would have on contemporary (human) performance expectations in Western art music.
