Industry Analysis

The Iceberg Index: What We're Missing About AI and Work

Jerome Leclanche
2026-04-12

If you've read anything at all about AI and jobs in the last couple of years, you already know the script. Programmers are getting laid off. Junior developers can't get a first gig. A handful of copilots are quietly doing the work of a dozen entry-level engineers. The anxiety is loud, it's in every headline, and honestly, a lot of it is fair.

But a recent MIT paper, The Iceberg Index: Measuring Skills-centered Exposure in the AI Economy, makes a case I've been waiting for someone to make properly: the entire conversation is pointed at the wrong part of the economy. Not because the tech-sector disruption isn't real — it is — but because it's only a small piece of what's actually happening. The rest is sitting, unmeasured, under the waterline.

Jobs are the wrong unit

The first problem is conceptual, and it's one I think about a lot when clients ask me what AI will "do" to their workforce. AI doesn't really replace jobs. It replaces tasks inside jobs. A lawyer doesn't vanish; the hours she spends reviewing boilerplate contracts quietly shrink. A financial analyst still comes in on Monday; the morning she used to spend cleaning data and summarising filings compresses into twenty minutes.

That sounds like a softer version of the same story. It isn't. Our whole apparatus for measuring work — GDP, unemployment figures, payroll surveys, wage data — was built to count jobs and people. It does that reasonably well. What it cannot do, and was never designed to do, is look inside a job and ask which parts of it an AI system can already technically perform. So the change doesn't show up where we're looking for it. By the time it appears in the official statistics, the transition is already well underway, and whatever workforce programmes have been commissioned are aimed at yesterday's problem.

The MIT team puts it more pointedly: every major economic transition has eventually forced economists to build new instruments. The industrial era needed output per hour, because nobody had a way to capture physical productivity at scale. The internet era broke GDP so badly that the US Bureau of Economic Analysis had to construct a separate Digital Economy Satellite Account just to account for things like Wikipedia, which replaced a paid encyclopaedia market with something that was enormously valuable but invisible to the national accounts. The AI era, they argue, needs the same kind of instrument, because intelligence itself is now a shared input between humans and machines.

What the Iceberg Index actually is

The Iceberg Index is MIT's attempt to build that instrument. It's not a forecast of which jobs will disappear. It's a map — a skills-level map of where AI capabilities and human work already overlap, weighted by the economic value of that work.

The methodology is worth walking through, because the design choices matter. The authors build a digital representation of 151 million American workers spread across 923 occupations and 3,000 counties. To describe each occupation, they lean on O*NET, the US Department of Labor's skills taxonomy, which breaks each job down into its actual component skills — analysing data, critical thinking, coordinating with others, programming, interacting with computers — along with importance ratings and difficulty levels drawn from surveys of real workers. So rather than saying "there are four million accountants in America", the representation says: here are the thirty-seven specific skills accounting work requires, here is how important each one is, and here is how hard each one is to perform.

They then do the same exercise for AI, cataloguing more than 13,000 real, production-ready AI tools currently deployed inside companies — copilots, document processors, financial-analysis software, workflow automations — and crucially, they map each one through the same O*NET taxonomy. So you end up with a profile of what each tool can do, expressed in exactly the same language as the humans.

For the first time, you get a genuinely apples-to-apples comparison: for any given occupation, what share of the wage value of that work can an AI system already technically perform?

That "wage value" phrase is the most important design choice in the whole paper, and it's easy to miss. Rather than asking how many accountants might lose their jobs, they're asking how much of the economic value accountants produce comes from skills AI can already perform. An accountant might spend 60% of their time on routine document processing and 40% on judgement calls and client relationships. Those are very different things, and treating them as equivalent — counting them as 60% of "an accountant" — would tell you almost nothing useful. Weighting by wage value means the index reflects where the real economic exposure sits, not just where the hours are.
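The mechanics are easy to sketch. The toy Python below is my own illustration, not the paper's code; the skill names, wage shares and AI-capability set are invented, but the shape of the calculation matches the design choice described above:

```python
# Illustrative sketch of wage-value-weighted exposure. Skill names, shares
# and the AI-capability set are invented for the example, not O*NET data.

def wage_value_exposure(skill_wage_share: dict[str, float],
                        ai_capable: set[str]) -> float:
    """Share of an occupation's wage value that comes from skills
    an AI system can already technically perform."""
    return sum(share for skill, share in skill_wage_share.items()
               if skill in ai_capable)

# A toy accountant: each skill's share of the wage value the job produces.
accountant = {
    "routine document processing": 0.35,
    "structured data handling":    0.25,
    "judgement calls":             0.25,
    "client relationships":        0.15,
}

# Skills that at least one catalogued AI tool covers, expressed in the
# same taxonomy as the occupation profile (the apples-to-apples step).
ai_skills = {"routine document processing", "structured data handling"}

exposure = wage_value_exposure(accountant, ai_skills)  # 0.35 + 0.25 = 0.60
```

Counting hours or headcount instead of wage shares would lose exactly the distinction the paper is after: the judgement and relationship skills carry wage value that the exposed share never touches.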

A few things the index deliberately does not do, because I see these conflated constantly. It excludes physical robotics; this is purely about digital and cognitive AI. It measures technical capability, not outcomes — it doesn't predict job losses, doesn't forecast adoption timelines, and says nothing about what firms will actually choose to do or what regulators will allow. The authors are explicit about the right mental model: think of it less like a weather forecast and more like an earthquake risk map. It tells you which buildings are sitting on a fault line. It does not tell you when the tremor will come.

The tip, and the mass below it

Now to the actual findings, which I'll admit stopped me mid-coffee the first time I read them.

If you measure the wage value of tech-sector work that AI can technically perform, you get roughly 2.2% of the entire US labour market, or about $211 billion across 1.9 million workers. This is what the paper calls the Surface Index, and it lines up well with everything we already know: more than 100,000 job losses linked to AI restructuring in 2025 alone, AI systems now producing more code per day than every human developer on the planet combined, coastal tech hubs leading the adoption curve. Washington, Virginia and California sit at the top of this ranking, as you'd expect.

The problem is that 2.2% is a misleading place to stop. The capabilities driving that number — document processing, routine analysis, structured data handling, summarisation, classification — don't belong to the tech sector. They are skills that appear in hundreds of occupations that would never show up in a headline about AI layoffs. The same capabilities that make a coding assistant useful to a software engineer overlap heavily with the daily work of a financial analyst, an HR coordinator, an insurance claims processor, a legal secretary, a loan officer, a policy analyst.

When you run the same methodology across the whole economy rather than just the tech sector, the number jumps from 2.2% to 11.7% — roughly $1.2 trillion in wage value. Five times larger. That is the Iceberg Index proper, and the name turns out to be exactly the right metaphor: the visible disruption in tech is the tip sticking out of the water; the mass underneath is four times bigger and nearly invisible to the instruments policymakers are currently using.
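The scope difference is worth making concrete. In the toy sketch below, every occupation, wage figure and exposure share is invented (the real index uses 923 occupations and actual wage data), but it shows how the same formula at two scopes produces a Surface number and an Iceberg number:

```python
# Toy Surface-vs-Iceberg comparison. All figures are invented for
# illustration; only the structure mirrors the methodology described above.

# (occupation, in tech sector?, total wage bill in $bn, AI-exposed wage share)
occupations = [
    ("software developer", True,   50, 0.60),
    ("financial analyst",  False, 150, 0.40),
    ("claims processor",   False,  80, 0.55),
    ("nurse",              False, 300, 0.05),
]

total_wages = sum(wages for _, _, wages, _ in occupations)

def exposed_share(include_occupation) -> float:
    """Exposed wage value as a share of ALL wages, for a chosen scope.
    Same formula either way; only the scope changes."""
    exposed = sum(wages * share
                  for _, in_tech, wages, share in occupations
                  if include_occupation(in_tech))
    return exposed / total_wages

surface = exposed_share(lambda in_tech: in_tech)  # tech only: the visible tip
iceberg = exposed_share(lambda in_tech: True)     # whole economy
```

With these made-up rows, surface comes out around 5% and iceberg around 26%. Toy numbers, but the structural point survives: restricting the scope to the tech sector hides most of the exposed wage value.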

And the people sitting on that underwater mass are not who the headlines have trained us to expect. They are, disproportionately, highly educated, well-paid professionals whose working day is built around reading, writing, analysing and summarising information — the people who, by any reasonable measure, did everything society told them to do, and did it well.

The gap between "can" and "does"

Before anyone panics, there's a second finding in the paper that's just as important. The gap between what AI can technically do and what it's actually doing in practice is still enormous.

For computer and math workers, the paper estimates AI is theoretically capable of handling somewhere around 94% of the tasks in the occupation. In observed professional usage, it's doing roughly a third. Similar patterns show up in legal work, in architecture and engineering, across a dozen other fields. The capability is there; what's holding the exposure back is regulation, integration friction, organisational habit, and the very reasonable insistence that a human check the AI's output before it goes out the door.

These are friction points, not permanent walls. They will erode as the technology matures, as compliance frameworks catch up, as integration improves, and as organisations get more comfortable with what "good enough" looks like. The Iceberg Index is essentially the ceiling those frictions are holding down. It doesn't tell you when the ceiling gives way. It tells you how much room there is underneath.
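The one-line version of that framing, using the rough figures quoted above (the variable names and the subtraction are my phrasing, not a formula from the paper):

```python
# Capability-vs-practice gap for computer and math work, using the rough
# figures quoted above. Names and framing are mine, not the paper's.

technical_ceiling = 0.94  # share of tasks AI could technically handle
observed_usage = 0.33     # roughly a third actually done by AI in practice

# The headroom the frictions are currently holding down: what erodes as
# regulation, integration and organisational comfort catch up.
latent_exposure = technical_ceiling - observed_usage
```

That leaves roughly 0.61 of the occupation's task value technically automatable but not yet automated, which is why the index reads as a ceiling rather than a forecast.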

Geography surprises

If I asked you which US states are most exposed to AI, you'd probably say California, Washington, New York. Reasonable guess. It's also wrong.

The Iceberg Index's state-level ranking puts South Dakota, North Carolina and Utah above California and Virginia. Delaware too. These are not the states writing anxious op-eds about AI. They are, however, states whose economies happen to be concentrated in administrative, financial and back-office work — which is exactly where the sub-surface exposure sits. California's workforce is diversified enough that the exposure spreads thin. A state built on finance and administration has nowhere for it to hide.

Tennessee is probably the starkest example in the paper. Its Surface Index — the tech-sector exposure — is 1.3%. Nothing that would set off alarm bells in any standard workforce-planning model. But its Iceberg Index is 11.6%, nearly ten times higher. The white-collar workforce keeping Tennessee's factories running is far more exposed than the tech sector everyone's been watching. Ohio and Michigan tell the same story: states that have spent years preparing for robots to arrive on factory floors, while the white-collar disruption is quietly arriving first, by a different door.

The deeper finding sits underneath this. Traditional economic metrics — GDP, per-capita income, unemployment — explain less than 5% of the variation in Iceberg Index scores across states. In some of the regressions, the relationship even flips. The states that look safest by the conventional measures aren't necessarily the least exposed, and the ones that look vulnerable aren't necessarily the most at risk. The tools policymakers are using can barely see the risk they're trying to manage. Billions are being committed to workforce preparation programmes that may be aimed, systematically, at the wrong places.

So what do we do with this?

I want to be careful not to overclaim what the paper says, because the authors are themselves careful. The Iceberg Index does not predict displacement. It does not say when the capability will become practice. It does not account for new jobs that the same technology will create, and those will be real — every previous transition of this kind has generated new roles alongside the disruption, and there's no reason this one won't. The index is a capability map, not a prophecy.

But capability maps are genuinely useful, in a way that the usual discourse around AI and jobs mostly isn't. Here is what I take away, and what I'll be telling the companies we work with at Ingram when this comes up:

The exposure is in the middle of the org chart, not the edges. If your AI-readiness conversation is mostly about what the engineering team is using Copilot for, you're looking at roughly a fifth of the problem. The other four-fifths of the exposure lives in finance, operations, HR, compliance, legal ops, customer support, internal analytics — the administrative and cognitive backbone of the business. That's where the tasks-not-jobs reshaping is going to be most visible first, and where the governance questions get interesting.

Standard workforce-planning metrics will tell you the wrong story. If you're benchmarking AI exposure against unemployment rates or regional tech-employment share, you will systematically underestimate it in exactly the places where it matters most. This is the entire argument for why a skills-level instrument was needed in the first place.

"Technical capability" and "real impact" are not the same thing — yet. The gap between them is currently wide, and that gap is where all the interesting decisions live. How fast a firm moves across it is not a function of what the model can do; it's a function of adoption strategy, regulatory posture, integration investment, quality thresholds, and how comfortable the organisation is with handing over tasks it has historically handed to junior humans. These are management questions, not AI questions, and they are the ones worth spending time on.

And for anyone doing ISO 42001 work, or thinking about EU AI Act readiness: this is the map you want on the wall. Not because it tells you where to deploy, but because it tells you where the governance pressure is actually going to land. Responsible deployment is a lot easier when you know which skills and which workflows are exposed, and which aren't, across your organisation — and when you can have that conversation with your people before the roles start shifting, rather than afterwards.

The honest summary is that the map we've been using was drawn for a different economy. The Iceberg Index is a serious attempt to draw a better one. Whether governments, companies or individual workers actually use it is the question the paper can't answer. But if they don't, the cost isn't an abstract one. It's a workforce transition run blind, at speed, with the instruments pointing the wrong way — and that's a transition we've seen before, and it never ends well for the people it happens to.

Better to know what's under the water.