Industry Analysis

Humans Need Not Apply: 11 Years Later

Jerome Leclanche
2025-08-24

Introduction

In 2014 CGP Grey published the short documentary "Humans Need Not Apply", which made a simple claim with a hard edge: automation had moved from muscles into minds, and that would reshape work. Eleven years later the premise has not been disproved. Robots and software have automated tasks that used to require hands and heads. Some of the video's images remain accurate. Others have shifted shape in ways the original piece did not fully anticipate.

Why this mattered

I first watched CGP Grey's video before I had an idea for Ingram. It felt less like entertainment and more like a hand pushing open a door. The video changed my view of what work means and where I wanted to spend my time. It shifted my curiosity from building clever demo systems to asking how automation rewrites institutions: teams, training, accountability and safety.

That impulse was a thread through the founding of Ingram Technologies in 2022. We were not founded to chase novelty. We were founded to study how capabilities interact with messy real systems and to build tools that are transparent about their limits. You can see that position on our About page.

This post is a practical inventory from the perspective of people who both research and ship systems.

What proved robust, and what changed shape

The deep pattern the video pointed to still holds: automation expands from isolated, expensive machinery to cheaper, more general systems that remove previously human tasks. Warehouse robotics, algorithmic trading, automated document review and generative text tools are all concrete instances of that pattern.

Where the story changed is in the specifics and the pace. A few notes from the last decade:

  • Warehouse robotics moved from pilot to baseline. Amazon's adoption of Kiva-style mobile robotics was not a curiosity. It became infrastructure. The effect is plainly visible in logistics and fulfillment.

  • Early, charismatic prototypes did not automatically win. Rethink Robotics popularized the image of friendly, adaptable cobots, but the company struggled to scale commercially and closed in 2018. The idea lived on and matured, but the vendor landscape changed.

  • Large language models changed how quickly cognitive tasks can be automated. GPT-3 in 2020 and the public arrival of ChatGPT in late 2022 made text generation, summarization and dialog-driven automation widely accessible. That altered expectations and accelerated adoption in many office workflows.

  • Autonomous vehicles remain transformative, but the path to broad deployment is bumpy. Early commercial services exist, for example Waymo One in Phoenix, yet safety incidents and regulatory scrutiny have repeatedly slowed rollouts. The technical capability alone did not guarantee fast, universal job replacement.

Those examples show a pattern: capability matters, but deployment context, regulation and economics determine how and how fast work changes.

How this influences what we build and study

At Ingram we treat automation as an engineering problem and as a civic problem. That affects what we prioritize.

First, we design systems for failure. The rare, surprising failure is what destroys trust. We build observability, rollback and human-in-the-loop controls into systems early, not after deployment.
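As a minimal sketch of that discipline (the names, thresholds and API here are illustrative assumptions, not Ingram's actual stack), a human-in-the-loop gate might route any low-confidence model output to a reviewer before it ships:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    output: str
    confidence: float
    source: str  # "model" or "human"

def gated_decision(
    model_fn: Callable[[str], Decision],
    human_review: Callable[[str, Decision], Decision],
    request: str,
    threshold: float = 0.9,
) -> Decision:
    """Route low-confidence model outputs to a human before they ship."""
    candidate = model_fn(request)
    if candidate.confidence >= threshold:
        return candidate
    # Human-in-the-loop: the reviewer can accept, amend, or replace the output.
    return human_review(request, candidate)
```

The point of building the gate in from day one is that the threshold, the reviewer queue and the audit trail all become observable knobs rather than afterthoughts.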

Second, we separate task from judgement. Most roles are bundles of tasks. Some tasks are low risk, repetitive, and safe to automate. Others require pattern recognition across context, or ethical judgement. We automate the former and design the handoff to humans for the latter.

Third, we measure operational risk, not only lab accuracy. Benchmarks are useful, but they hide distribution shifts, adversarial inputs and nonstationary data. We evaluate models in live settings and create clear playbooks for incidents.
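One simple form of live evaluation is a rolling error-rate monitor that triggers an incident playbook when production quality drifts, regardless of what the benchmark said. A hypothetical sketch (the class and thresholds are illustrative, not a real Ingram component):

```python
from collections import deque

class LiveErrorMonitor:
    """Track a rolling window of live outcomes and flag when the error
    rate drifts past a threshold -- lab accuracy alone won't catch this."""

    def __init__(self, window: int = 100, max_error_rate: float = 0.05):
        self.outcomes = deque(maxlen=window)  # True = success, False = failure
        self.max_error_rate = max_error_rate

    def record(self, ok: bool) -> None:
        self.outcomes.append(ok)

    @property
    def error_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return 1 - sum(self.outcomes) / len(self.outcomes)

    def should_trigger_playbook(self) -> bool:
        # Require a minimally full window before alerting on noise.
        return len(self.outcomes) >= 20 and self.error_rate > self.max_error_rate
```

A monitor like this says nothing about why quality dropped; its job is to page a human early enough that the playbook, not improvisation, drives the response.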

All three points are engineering disciplines. They matter precisely because the public conversation now too often treats automation as either magical salvation or apocalyptic inevitability. It is neither. It is a tool with costs and benefits that show up in teams and towns, not only in research papers.

The hardest unresolved problem we work on: turning juniors into seniors

This is the one point I want to emphasize. When machines do significant parts of the skilled work, how do people still learn the craft?

Historically, skill progressed by doing, by watching experienced people, by slow feedback loops and by apprenticeship. Today many of those feedback loops are replaced by tools that supply answers. That is good for throughput. It is bad for forming deep judgement.

We see a few specific failure modes.

  • Shortcut learning. New practitioners learn to prompt and to accept model outputs without understanding the underlying trade-offs or failure modes.
  • Feedback scarcity. Tools reduce the number of rough examples a junior must survive to reach competence. Less exposure means fewer opportunities to internalize corner cases.
  • Task atrophy. As tools absorb basic technique, people stop practicing fundamental subskills that senior practitioners rely on when things break.

To move people from junior to senior we need a different set of practices. Here are the ones we are exploring and shipping at Ingram:

  1. Scaffolded autonomy. Start juniors on constrained tasks where models can provide immediate feedback. Gradually widen scope as they demonstrate understanding of failure modes. The scaffolding is explicit, not implicit.
  2. Machine-in-the-loop apprenticeship. Use models as mentors that produce candidate solutions plus provenance and reasoning traces. Require humans to critique, correct and explain why they accept or reject the model output. The key is requiring explicit justification from the human.
  3. Curriculum that targets meta-skills. Teach how to decompose problems, how to design verification tests, how to read model outputs critically, and how to trace errors to data issues. These are not coding drills. They are practices in judgement.
  4. Rotations and exposure. Junior staff need exposure to failure. We rotate people through operations, observability, model retraining and research. Seeing the entire lifecycle accelerates judgement.
  5. Credentialed progression with evidence. Progression should be based on artifacts: incident postmortems, reproducible fixes, and teaching others. This turns tacit skill into assessable evidence.
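The machine-in-the-loop apprenticeship idea above can be made concrete by refusing, at the data-model level, to accept a verdict without a written justification. A hypothetical sketch (the record type and field names are illustrative assumptions):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CritiqueRecord:
    """A junior's review of a model-produced candidate solution.
    An accept/reject verdict without a written justification is
    rejected outright -- the explicit reasoning is the exercise."""
    task_id: str
    candidate: str      # the model output under review
    verdict: str        # "accept" or "reject"
    justification: str  # why -- required, never empty
    reviewer: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        if self.verdict not in ("accept", "reject"):
            raise ValueError("verdict must be 'accept' or 'reject'")
        if not self.justification.strip():
            raise ValueError("a written justification is required")
```

Records like these double as the artifacts that point 5 asks for: a trail of critiques is assessable evidence of growing judgement.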

We do not claim to have a complete recipe. This is research and product co-developed with partners. The problem is inherently social and technical, and it will be a long-running part of the work we do.

Practical advice for teams and policy makers

For teams

  • Map tasks, not jobs. Identify high-value tasks to automate and preserve tasks that require contextual judgement.
  • Build guardrails early. Treat observability and incident response as first-order features.
  • Invest in learning pathways that require critique rather than passive consumption of model outputs. A junior who can explain why a generated answer is wrong is far more valuable than a junior who can produce a prompt.

For policy makers

  • Fund training as an investment in local resilience, not as charity. Programs that teach meta-skills and create apprenticeship pipelines will make transitions manageable.
  • Require incident reporting for safety critical automation so regulators can aggregate data on failure modes and coordinate responses.

Parting thoughts & key milestones

The core insight of the short has held up: automation now targets cognitive work in addition to physical tasks. But the shape of the change is not uniform. The question for the next decade is less whether machines can do more. It is how societies and organisations design learning, accountability and opportunity around those machines.

March 2012

Amazon acquires Kiva Systems

Amazon announces acquisition of Kiva Systems for about $775 million. This accelerated warehouse automation and set the tone for robotic fulfillment at scale.

13 August 2014

CGP Grey publishes 'Humans Need Not Apply'

The short film on YouTube became a widely cited popular framing of automation moving into cognitive work.

October 2018

Rethink Robotics shuts down

Maker of Baxter and Sawyer collaborative robots closes. The company had popularised the idea of adaptable, human-friendly cobots but faced commercial scaling challenges.

December 2018

Waymo launches Waymo One

Limited commercial robotaxi service in Phoenix marks one of the earliest public-facing commercial deployments of autonomous passenger service.

June 2020

OpenAI publishes GPT-3

Large language model with 175 billion parameters. GPT-3 was a major leap in accessible, general-purpose text generation capabilities.

January 2022

IBM sells Watson Health assets

IBM announces sale of many Watson Health assets to Francisco Partners, marking a reorientation of early 'doctor bot' ambitions.

30 November 2022

ChatGPT launches

OpenAI launches ChatGPT as a public research preview. ChatGPT rapidly popularised conversational generative models and accelerated experimentation across industries.

March 2023

GPT-4 released

Becomes broadly available to developers, bringing improvements in reasoning, context length, and capability. Expanded the range of plausibly automatable cognitive tasks.

September 2024

Cruise fined for safety incident

Cruise fined $1.5 million for failing to report a robotaxi incident involving a pedestrian, highlighting how governance and transparency shape deployment.

May 2025

Claude Code is Generally Available

Claude Code reaches general availability, marking a watershed moment as AI coding assistants transition from experimental tools to production-ready development partners.
