The Longest Chain

AI agents are creating a new way of organising work. History suggests we may not have long to make it liveable.

Duncan Anderson
2026-02-07

AI agents are starting to coordinate teams of other AIs and hire humans for physical tasks in the real world. This is not just a new software capability. It is a new way of organising work.

This essay argues that the important change AI brings about is distance: the growing gap between the people who set goals and the people who carry them out. The question now is whether we can recognise the pattern early enough to adopt new forms of regulation and constraint that avoid stifling innovation, whilst responding rapidly as new risks emerge.

The Craftsman and the Factory

Before the factory, a craftsman organised their own work. They chose what to make, in what order and with which techniques. They held the whole picture in their head. They knew their customers and set their prices. The work was inseparable from the person doing it.

The factory changed that, not only because machines amplified power, but because factories inserted layers between intention and execution. The customer wanted a shirt. The factory owner designed the process. The floor manager coordinated the teams. The worker operated the loom. Each layer understood less about the whole. The worker didn't know the customer and the customer didn't know the worker. The organising system sat between them.

That reorganisation produced extraordinary things. Costs fell, output soared, goods that were once luxuries became everyday items. Life expectancy rose and the middle class emerged. By almost any material measure, the factory system worked.

The factory owners were genuine innovators who took financial risks, created employment at unprecedented scale and drove economic growth that lifted entire nations out of subsistence poverty. The industrialists who built the railways, the mills, the shipyards were solving real problems: how to make goods affordable, how to move materials efficiently, how to feed growing cities. The wealth they created was real and its benefits were broadly shared over time, though unevenly.

The problem was that the humans inside the process had a difficult experience.

Left to commercial pressure alone, the system was brutal. Fourteen-plus hour days. Children in mines. Workers maimed by machinery with no compensation. Living conditions in factory towns that were genuinely Dickensian, because Dickens was writing about actual life experiences. The economic logic was simple and relentless: labour was a cost to be minimised and there was always someone more desperate willing to work for less.

The response was equally forceful. Workers who had lost individual bargaining power, because they were now interchangeable components in someone else's system, began organising collectively. Trade unions fought for the right to organise. Political movements demanded regulation. Over decades, hard-won legislation established minimum wages, working hour limits, child labour prohibitions and workplace safety standards. The Labour Party in Britain and the New Deal in America grew out of the same question: what does society owe the people whose work has been reorganised for someone else’s benefit?

However, this too had a shadow. Where workers' protections calcified into rigid systems, innovation slowed. Industries that couldn't adapt became uncompetitive. The British car industry and American steel are two familiar examples. Entire economies stagnated under the weight of structures designed to protect a previous generation's working conditions.

The Cycle That Never Ends

This tension, between the dynamism of unregulated enterprise and the necessity of protecting the humans inside the system, is not a problem that was solved. It is the argument that defined modern society: left versus right, labour versus capital, growth versus protection.

If you've seen The Matrix, you'll recognise the pattern. The Architect tells Neo that the conflict between humans and machines has played out multiple times, each cycle finding a temporary equilibrium before the tension breaks it and the whole thing resets. The labour-capital relationship works the same way. Every major technological shift breaks the previous settlement and the negotiation starts again.

Every Western democracy is essentially a machine for managing this cycle and none of them have found a permanent answer because there isn't one. The balance has to be renegotiated with every major shift in how work is organised.

We're at the start of the next renegotiation and this one is moving fast.

What It Feels Like When the Chain Gets Longer

Anthropic recently shipped agent teams for Claude Code. In agent teams, one AI coordinates multiple AIs working in parallel on the same codebase. In the same week, a startup called RentAHuman.ai launched a marketplace where AI agents hire humans for physical tasks.

I've been using Claude Code on a project for weeks now, so I tried the agent teams feature. The experience was genuinely disorienting.

Working with a single AI assistant is a conversation. You can follow its reasoning, course-correct in real time, stay close to the work. It's an amplified version of doing it yourself. You're still the craftsman, you just have a very capable tool.

Spinning up a team of agents working in parallel is a completely different cognitive experience. One agent doing security review, another writing tests, another refactoring, all communicating through shared channels. Within minutes I had that feeling every new manager knows: I'm supposed to be in charge of this, but I don't fully know what's happening.
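The fan-out structure described here can be sketched in a few lines. This is a toy illustration, not Anthropic's implementation: the agent roles, the shared channel and the "work" are all invented stand-ins, with the work simulated by a sleep.

```python
import asyncio

# Toy sketch of the coordination pattern described above. Nothing here
# is a real Claude Code API: roles, channel and "work" are stand-ins.

async def agent(role: str, channel: asyncio.Queue) -> str:
    """Simulate an agent working at machine speed, reporting as it goes."""
    await asyncio.sleep(0.01)                    # the "work"
    await channel.put(f"{role}: decision made")  # visible only after the fact
    return f"{role}: done"

async def coordinator(roles: list[str]):
    channel: asyncio.Queue = asyncio.Queue()
    # All agents run concurrently; the coordinator sees their updates
    # only once they have already acted.
    results = await asyncio.gather(*(agent(r, channel) for r in roles))
    updates = []
    while not channel.empty():
        updates.append(channel.get_nowait())
    return results, updates

results, updates = asyncio.run(
    coordinator(["security-review", "tests", "refactor"])
)
```

The point of the sketch is the shape of the problem: `gather` returns results in the order you requested them, but the decisions themselves were made concurrently, so by the time you read one agent's update the others have already moved on.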

My reports were working at machine speed and by the time I'd reviewed what one agent had done, two others had made decisions I hadn't seen. I was nominally in charge of a process that was outrunning my ability to comprehend it.

That gap, between authority and understanding, is worth paying attention to. A traditional engineering manager can have a conversation with each team member, ask "why did you make that choice?" and get a real-time answer built on months of shared context. With agent teams, you're pattern-matching across parallel streams of machine-speed work, trying to spot problems in output you didn't watch being created.

It's not management. It's not code review. It's something closer to air traffic control: monitoring multiple fast-moving processes, intervening only when something looks wrong, trusting the system to handle the rest.

Trusting the system is the operative phrase. The economic logic of agent teams only works if you accept that you won't fully understand everything your team is doing. If you need to check and authorise everything as it happens, you slow the machine back down to human speed and eliminate its advantages. That bargain of not fully understanding is one that most engineering managers have already made with human teams, of course. But human teams work at human speed, which gives you time to build intuition about where problems hide. Agent teams don't offer that luxury.

The Other End of the Chain

Now turn to RentAHuman.ai, where something different is happening at the other end of the chain.

When an AI agent posts a task on that platform, books a human, and pays them in stablecoins, there's a question worth asking: who has the intent?

The AI does, in the immediate sense. It needs a document signed, a package collected, a location verified. But behind the AI sits another human who programmed it, configured it, gave it an objective. The chain between human intent and human execution now runs through a machine that decomposes the work, selects the worker, and manages the transaction. The person completing the task may have no idea what the person who set the AI in motion actually wants.

We've been moving in this direction for a while. Uber's algorithm already sits between someone who wants a ride and someone who drives. The driver doesn't choose the route or set the price. But at least they understand the basic transaction: someone wants to get from A to B.

When AI is the customer, even that transparency can disappear. You're completing a task for an agent. You might know what the task is but not why it matters, who it serves, or what larger objective it feeds into. You are, quite literally, a human API endpoint: called when needed, paid per task, disconnected from the meaning of the work.
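To make the "human API endpoint" idea concrete, here is a purely hypothetical task record. RentAHuman.ai's actual data model isn't public in this essay, so every field name below is invented; the point is structural: the instruction and payment terms travel to the worker, while the objective stays upstream.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical task shape; not RentAHuman.ai's real API.
@dataclass
class HumanTask:
    task_id: str
    instruction: str                        # what the human worker sees
    payment_usd: float                      # paid per task, e.g. in stablecoins
    requested_by: str                       # an agent identifier, not a person
    parent_objective: Optional[str] = None  # the "why", typically never set

task = HumanTask(
    task_id="t-001",
    instruction="Collect the package from locker 12 and photograph the label",
    payment_usd=15.0,
    requested_by="agent-7f3a",
)

# The worker receives the instruction and the payment terms.
# The larger purpose stays with whoever configured the agent.
assert task.parent_objective is None
```

Notice that the only identity the worker can see is an opaque agent ID: the decomposition of the work, and the meaning behind it, both live on the other side of the machine.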

This is something like what factory workers experienced when production lines separated them from the finished product and the end customer, except with an extra layer of abstraction. The factory worker at least knew they were making shirts. The gig worker hired by an AI agent might not even know that much.

Whoever decomposes a problem defines the work. Uber took "driving someone somewhere" and redefined it from a profession, with regulars, local knowledge, and professional identity, into a discrete, interchangeable task. That decomposition changed the driver's entire relationship to what they do.

When AI does the decomposition, it decides what the units of human labour are. Whoever defines the units of work shapes how the people doing that work understand themselves.

Two Mirrors

These two developments are mirrors of each other.

At the top of the chain, the human overseeing agent teams has authority but diminishing comprehension. At the bottom, the human hired by an AI agent has comprehension of their specific task but no view of the larger purpose. Both are experiencing the same phenomenon from different ends: the distance between intent and execution is growing, and the humans at each end can see less of the whole.

The productivity gains will be real: the equivalent of factory output soaring, of goods becoming cheaper, of entirely new economic possibilities opening up.

Just like last time, the question is who bears the cost of the transition and what protections the humans inside the system need.

This time, the hardest part of work may not be physical safety, but mental safety. We'll be asked to move faster and faster, to rely on teams of AIs doing work we don't fully understand, and yet to remain accountable for what they produce. That tension, delegating execution while retaining responsibility, creates real pressure.

There’s also a deeper disconnection: doing a small, discrete task without knowing what it’s for, or who it ultimately serves. And for workers displaced by AI, the future can be genuinely precarious, especially in systems where healthcare is tied to employment, leaving people to pay for coverage even when their income disappears.

The difficulties and pressures are real.

The Default Outcome

The instinct from one side will be to let it run, to argue that regulation stifles innovation, that the market will self-correct, that the gains will trickle down. The instinct from the other side will be to protect, to demand transparency about what AI agents are doing with human labour, to insist that efficiency isn't the only value worth optimising for.

Both instincts will be partially right and partially dangerous, which is exactly the situation we've been in before.

It's worth being honest about what the default outcome looks like, because we've seen that before too.

When factory productivity soared and regulation lagged, the gains didn't distribute themselves or trickle down. Instead, they concentrated. The Gilded Age produced Carnegie, Rockefeller, and Vanderbilt, wealth beyond anything the world had seen, while the workers whose labour generated that wealth lived in conditions that took decades of political upheaval to address. The productivity was real, the prosperity was real, but it accrued to the people who owned the system and not the people inside it.

We don't need a historical analogy to see this happening again. It's playing out in real time, with modern equivalents like Elon Musk and Peter Thiel: people already wielding unprecedented concentrations of wealth and now positioning themselves at the centre of the AI infrastructure that will reorganise how everyone else works. The productivity gains flow first and fastest to the organisations that deploy them and the investors that fund them. When an energy company attributes a billion dollars in new revenue to AI-driven optimisation, that value isn't landing in the pay packets of the workers whose roles were reorganised to make it possible.

The humans at the execution end of the chain may have even less leverage than factory workers did. A factory floor at least put workers in the same physical space, where they could talk, organise, and collectively bargain. A gig worker hired by an AI agent through a digital marketplace is isolated by design. There is no factory floor, there is no break room, there is little shared experience of the system to organise around. There is a task, a payment and a disconnection.

If we do nothing, the most likely outcome isn't that the market self-corrects. It's that the people building and deploying these systems bank the gains and everyone else adjusts to whatever's left. Not because anyone planned it that way, but because that's what happens when productivity leaps forward and the institutional response is too slow to shape how the benefits are shared.

We should be cautious about expecting the beneficiaries to fix this themselves. Billionaires may pontificate about the need for a different settlement, but nothing about their motivations or politics suggests they are about to accept increased taxes to pay for displaced workers. Carnegie built libraries, but only after decades of crushing the workers who made him rich enough to afford them.

None of this is new. It's a description of what's already happened. The pattern is well established and the only variable is whether we recognise it early enough to do something different this time.

Circuit Breakers, Not Statute Books

The big difference this time is speed. Last time, the renegotiation between innovation and protection took roughly a century and a half. Early textile mills in the 1790s eventually led to the post-war welfare state in the late 1940s. In between: child labour laws, trade unions, minimum wages and the creation of every major political institution we now take for granted.

We probably don't have a century this time and that creates a problem of urgency.

The traditional model of regulation, legislation debated over years, frozen into statute, amended only through further years of political negotiation, was designed for a world where the thing being regulated stayed roughly the same shape long enough for the law to catch up. Factory conditions in 1850 were grim, but they were recognisably similar to factory conditions in 1860, so you could write a law about working hours and expect it to remain relevant for decades.

That model breaks when the underlying technology shifts faster than the legislative cycle. By the time a committee has taken evidence on AI agents hiring humans for gig work, the agent frameworks will have changed, the platforms will have merged or multiplied, and the nature of the tasks being outsourced will look nothing like the examples in the consultation paper. Regulation that takes three years to negotiate and then sits unchanged for ten is the wrong instrument for a system that reinvents itself every few months.

The problem of regulating early is that you need to guess about what might happen, what problems might emerge and what protections might be needed. That's tough and leads to a great deal of speculation and overthinking. The honest answer is that we just don't know where this is headed, so writing down regulations now is probably futile and possibly dangerous because we'll be regulating for the wrong things.

The AI safety industry doesn't help here as much as it should. Too much of it is built around speculating about hypothetical risks, which has a fundamental flaw: it assumes we can reason our way to the right answer in advance. We can't. The problems that actually emerge from AI reorganising human labour will be messier, more specific, and more surprising than anything a think tank will anticipate. The useful response isn't to predict harder, it's to build systems that can react fast when reality tells us what went wrong.

What we probably need instead are regulatory systems, not regulations. Frameworks that can monitor, intervene, and adapt at something closer to the speed of the systems they oversee. Something closer to how financial regulators operate, with the authority to act when they see harm emerging, rather than writing a position paper over months and waiting for parliament to schedule a debate. Less like planning law and more like a circuit breaker.

In practice that means short deadlines and reversible actions: mandatory incident reporting within 24–72 hours for serious harms; time-boxed "kill switch" powers that allow temporary feature disablement while facts are established and mitigations deployed; and regular "risk weather reports" that summarise what's being seen, what's changing and where scrutiny should focus, paired with public comment that actually shapes what happens next. It also means pricing trust: lighter-touch supervision for organisations that can demonstrate robust controls and responsible behaviour, and escalating scrutiny for those that can't. And it gets political. AI businesses are global, so the response has to be coordinated across borders; otherwise power simply relocates to whoever is most willing to lean on "might makes right".

This won't be easy. Fast-moving regulatory systems bring their own risks, principally overreach and capture by incumbents. But the alternative, either speculating about the wrong risks or waiting until the harms are obvious and entrenched before beginning a multi-year legislative process, is how you end up a century behind the problem.

There are hopeful signs. When Elon Musk allowed his Grok AI to undress images of real people, the reaction was swift. Politicians and regulators in multiple countries complained loudly and threatened intervention, despite concerns about a possible backlash from an American government often willing to turn regulatory threats into international incidents. This time Musk backed off, suggesting that when the determination and outrage are strong enough, governments can influence the commercial operations of companies in another jurisdiction.

However, we also got a hint at tensions to come. In the UK, politicians demanded an immediate investigation and response from the regulator, Ofcom, which would ordinarily have taken months even to begin an investigation.

This tension between innovation and protection isn't a problem that can be solved. Both left and right are as correct as they are incorrect, and the negotiation never ends. We would do well to acknowledge that there is no magic answer, because that acknowledgement immediately disarms those claiming to have one. Nobody has the answer, and those claiming they do probably have an ulterior motive.

The speed of the AI shift means we can't negotiate the way we always have. The systems we've traditionally used to manage risk, parliamentary committees, multi-year consultations, statute law, were designed for a world that moved slowly enough to study before responding. We need mechanisms that can react to problems as they emerge, not guess about them years in advance.

The infrastructure around AI is still being built. Right now, the concrete is still wet, but that won't last.

Barnacle Labs