Every few months, a new study ranks the jobs most likely to be destroyed by AI. Most of these analyses share the same flaw: they measure what AI could theoretically do, not what it is actually doing in real workplaces today. And even those that close that gap, as Anthropic did in their landmark March 2026 paper, stop short of accounting for the structural constraints that govern whether automation can be deployed in practice. That is the work this analysis sets out to do.
On March 5, 2026, Anthropic published something different. In a paper titled “Labor Market Impacts of AI: A New Measure and Early Evidence,” economists Maxim Massenkoff and Peter McCrory introduced what they call an observed exposure index, a framework built from anonymized Claude usage data across professional settings, cross-referenced against the U.S. Department of Labor’s occupational task database covering 800 job categories.1 The result is the clearest picture we have yet of where AI automation is actually landing.
Below, we apply the ACAIS Structural Displacement Framework to four professions that all score high on theoretical AI exposure, and reach very different conclusions depending on the institutional context in which each role operates.
The Method That Changes the Conversation
Most previous analyses, including the much-cited Oxford study estimating 47% of U.S. jobs at risk of computerization and the WEF Future of Jobs report projecting 92 million displaced roles by 2030, rely on task classification: researchers list what a job requires, then assess whether AI could theoretically handle each task.2,3 It is a reasonable starting point, but it systematically overstates near-term risk because it ignores whether the technology is actually being deployed at scale in that context.
Anthropic’s approach is different. Their index layers two signals on top of each other. The first is standard theoretical feasibility: can a large language model perform this task? The second is observed reality: are users of Claude actually using it to perform this task in an automated, work-related context, at meaningful scale? A job’s exposure score rises when both signals align. When only the theoretical one fires, the job is logged as potential exposure, not current risk.
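To make that two-signal logic concrete, here is a minimal sketch of how such an index could be computed for a single occupation. The task names, numbers, and aggregation rule are illustrative assumptions, not Anthropic's published methodology.

```python
# Minimal sketch of the two-signal exposure logic described above.
# Task names, numbers, and the aggregation rule are illustrative assumptions,
# not Anthropic's published methodology.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    theoretically_feasible: bool      # could a current LLM perform this task?
    observed_automated_share: float   # share of real, work-related usage that is automated (0..1)

def exposure(tasks: list[Task]) -> dict[str, float]:
    """Theoretical coverage counts every feasible task; observed coverage
    only rises where feasibility and real automated usage align."""
    n = len(tasks)
    theoretical = sum(t.theoretically_feasible for t in tasks) / n
    observed = sum(t.observed_automated_share for t in tasks if t.theoretically_feasible) / n
    return {"theoretical_coverage": theoretical, "observed_coverage": observed}

# A toy three-task occupation: two tasks are feasible, only one sees meaningful automated use.
print(exposure([
    Task("draft status report", True, 0.60),
    Task("reconcile records", True, 0.10),
    Task("negotiate with client", False, 0.00),
]))
# {'theoretical_coverage': 0.666..., 'observed_coverage': 0.233...}
```

When only the first signal fires, the occupation registers as potential exposure; the score that matters for current risk is the observed one.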
“By laying this groundwork now, before meaningful effects have emerged, we hope future findings will more reliably identify economic disruption than post-hoc analyses.” (Massenkoff & McCrory, Anthropic Research, March 2026)
The researchers are explicit about intellectual humility here: a previous major study on job offshorability identified roughly a quarter of U.S. jobs as vulnerable, and a decade later, most of those jobs showed healthy employment growth. They are not predicting doom. They are building a measuring instrument so that if disruption accelerates, it can be detected early rather than rationalized after the fact.
The Chart That Matters: Theory vs. Reality
The most important visualization in the Anthropic paper compares two numbers for each occupational category: the theoretical percentage of tasks AI could perform, and the actual percentage being performed today. The distance between the two bars is where the risk lives. It represents untapped automation that could close as models improve, costs fall, and organizational inertia erodes.
Source: Anthropic, “Labor Market Impacts of AI: A New Measure and Early Evidence,” March 2026. Reconstructed by ACAIS from published figures.
The distance between the two bars in computer and math occupations is 61 points: 94% theoretically feasible, 33% actually occurring. That gap is not a comfort. It is a countdown. What currently keeps it from closing is a combination of legal constraints, integration costs, workflow inertia, and the fact that humans still need to review AI outputs in most regulated environments. These are not permanent barriers.
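Reading the chart is straightforward: the gap is simply the theoretical bar minus the observed one. In the sketch below, only the computer-and-math figures come from the text; the other category values are placeholders for illustration.

```python
# Gap = theoretical coverage minus observed coverage. Only the computer/math
# figures come from the text; the other values are placeholders for illustration.
categories = {
    "computer & mathematical": (0.94, 0.33),
    "office & administrative": (0.80, 0.40),   # placeholder
    "food preparation":        (0.05, 0.01),   # placeholder
}
gaps = sorted(((theo - obs, name) for name, (theo, obs) in categories.items()), reverse=True)
for gap, name in gaps:
    print(f"{name:25s} gap = {gap:.0%}")   # computer & mathematical   gap = 61%
```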
The Jobs Genuinely at Risk Right Now
Where both bars converge, with high theoretical coverage and high observed usage, is where real displacement pressure exists today. Anthropic’s data is consistent with what Microsoft found in their 200,000-conversation Copilot study of 2024 usage data, and with real labor market signals: customer service employment in the U.S. declined by approximately 80,000 positions between 2022 and 2024.4
- Computer programmers: 75% observed coverage
- Data entry & records clerks: ~70%
- Customer service representatives: high
- Proofreaders & copy editors: high
- Translators & interpreters: 98% theoretical
- Medical transcriptionists: 99% automated
- Telemarketers: high
- Junior software developers: rising
- Financial analysts (entry-level): rising
- Marketing content managers: rising
- Web developers: rising
- PR & communications managers: moderate
- Paralegals & legal researchers: moderate
- Recruitment coordinators: moderate
- Nurses & nurse practitioners: low
- Construction & skilled trades: low
- Secondary school teachers: low
- Farmers & agricultural workers: low
- Mechanics & repair technicians: low
- Bartenders, cooks, dishwashers: ~0%
- Lifeguards & physical roles: ~0%
One finding from the Anthropic study deserves more attention than it has received in the press: the workers most exposed to AI tend to be older, female, more educated, and better paid.1 This is not the automation of factory floors or warehouse jobs. It is the automation of professional knowledge work, the graduate-degree economy. The Microsoft study corroborates this: interpreters, historians, writers, and financial professionals dominate its top-40 exposed list.5
Anthropic finds suggestive evidence that hiring of workers aged 22 to 25 has slowed in high-exposure occupations, a pattern reinforced by a separate study showing a 16% fall in employment among this age group in AI-exposed roles.1 The unemployment rate for senior workers in these fields has not meaningfully risen. What is happening is a quiet contraction of the entry point into professional careers: the junior roles that used to exist as training grounds are being absorbed by AI before they are ever posted.
Where the Headlines Are Getting It Wrong
Task-level exposure scores are a useful starting point, but they systematically miss a critical dimension: the degree to which a role’s value derives not from the cognitive tasks it performs, but from the human interactions, institutional relationships, and accountability structures it sits inside. A compliance officer in a bank is not just performing text analysis and document review, both tasks AI handles well. They are signing off on decisions that carry personal legal liability, managing relationships with regulators who insist on speaking to a named human being, and navigating internal political dynamics that no language model can be made accountable for.
The ACAIS Structural Displacement Framework: Four Factors That Override Raw Task Exposure
When assessing actual near-term risk to a specific role, task automation potential is only one variable. These four structural factors can significantly reduce real-world displacement pressure even when theoretical AI coverage is high. They form the basis of the ACAIS adjusted scoring applied below.
- Mandatory human interaction density. Roles requiring sustained, multi-stakeholder human interaction as a core deliverable, not just as a procedural step, are far harder to automate than task lists suggest. Client-facing relationship managers, senior advisors, and regulatory liaisons fall into this category. The interaction is the product.
- Regulatory accountability and data privacy constraints. In heavily regulated environments like banking, insurance, healthcare, and law, the ability to deploy AI on sensitive data is constrained by GDPR, DORA, HIPAA, MiFID II, and equivalent frameworks. A compliance officer processing client data through an external LLM creates a data governance breach before it creates any efficiency gain. On-premise AI and sovereign model options exist but are expensive and not widely deployed yet, which creates a structural delay in automation for any role touching protected data classes.
- Legal liability and signatory accountability. Certain roles exist precisely because a human name must appear on a document and bear responsibility for its content. No organization can currently make an AI model legally liable for a flawed compliance assessment, a miscoded financial product, or a faulty audit opinion. Until the legal framework catches up, which may take a decade or longer, these roles retain structural protection regardless of how well AI performs the underlying analysis.
- Institutional trust and relationship capital. Some roles are gatekeepers to relationships that took years to build and that counterparties will not transfer to an automated system. A senior fund manager, a long-standing audit partner, or a private banking relationship manager holds value that is fundamentally non-replicable by an AI tool, at least within the timeframes most disruption studies are modeling.
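As a rough sketch of how these four factors might dampen a raw exposure score, consider the following. The weights and factor scores are illustrative assumptions; the actual ACAIS weighting is not published here.

```python
# Hypothetical structural adjustment: each factor (scored 0..1) dampens raw exposure.
# The weights below are illustrative assumptions, not the actual ACAIS weighting.
WEIGHTS = {
    "interaction_density":   0.30,   # mandatory human interaction
    "regulatory_constraint": 0.30,   # data privacy / regulatory limits on deployment
    "legal_accountability":  0.25,   # named-human signatory liability
    "relationship_capital":  0.15,   # institutional trust built over years
}

def adjusted_score(theoretical_exposure: float, factors: dict[str, float]) -> float:
    """Returns a displacement score in [0, 1]; strong structural factors pull it down."""
    dampening = sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS)
    return theoretical_exposure * (1.0 - dampening)

# A compliance-officer-like profile: high raw exposure, strong structural protection.
print(adjusted_score(0.85, {
    "interaction_density": 0.7,
    "regulatory_constraint": 0.9,
    "legal_accountability": 0.9,
    "relationship_capital": 0.5,
}))  # ≈ 0.19
```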
Four Roles Worth Examining Closely
All four roles below score above 80% on theoretical AI task exposure, placing them firmly in the ‘high risk’ category in most published rankings. The ACAIS Structural Displacement Score adjusts that number downward based on four weighted factors: regulatory data constraints, legal accountability requirements, mandatory human interaction density, and institutional relationship capital. The divergence between the two scores is where the real analysis lies.
ACAIS displacement assessments reflect near-term probability of role elimination over the stated horizon, based on our analysis of agentic AI deployment readiness, regulatory data constraints, legal accountability structures, and mandatory human interaction density. External data cited where applicable.
Bank Compliance Officer
On paper, this role looks exposed. Compliance work involves large volumes of document review, regulatory text analysis, policy cross-referencing, and report generation. These are all tasks where current LLMs perform well. Theoretical AI coverage for the role is high. In practice, however, a compliance officer in a European bank is working with data classes that cannot be sent to external models under GDPR without explicit data processing agreements most AI providers do not offer at the required granularity. Their outputs carry personal liability under senior manager accountability regimes such as the UK's SM&CR and comparable national frameworks in the EU. Their interactions with the ECB or local supervisors are expected to involve a named human. And the organizational dynamics of compliance, specifically managing conflict between the front office and the control function, require political judgment that is entirely outside the scope of language modeling.
AI will change this role. Document review will be faster, initial risk assessments will be assisted, reporting will be partially automated. But wholesale displacement at the senior-to-mid level is structurally constrained for the foreseeable future.
Verdict: Augmented, not replaced. Medium-term horizon.
Back & Middle Office in Financial Services
This is the role the market most consistently miscategorizes. Theoretical AI task exposure for back and middle office financial roles appears high in aggregate indices because the job description, on paper, contains large volumes of data processing, reconciliation, and reporting. That reading misses the institutional reality of where this work actually happens.
The data that back and middle office roles operate on is among the most tightly controlled in any regulated sector. Trade data, counterparty exposure, margin calculations, and client positions are proprietary, often classified as inside information, and subject to strict data governance requirements under MiFID II, DORA, and Basel IV. Routing this data through any external AI model is not a discretionary choice for a compliance-conscious institution: it is a regulatory breach. The argument that on-premise AI will solve this is valid in principle, but the deployment reality in tier-one financial institutions is still measured in years, not quarters.
The more accurate profile for this role is AI Assist rather than AI Replace. The mid-level risk or controls professional is more likely to see their workflow augmented: reconciliation exceptions flagged automatically, exposure dashboards updated in real time, regulatory reports pre-populated. The role evolves toward oversight and exception handling rather than disappearing. Pure processing functions at the most junior level face genuine attrition, but that process has been underway since the pre-generative-AI automation wave and is not a new development driven by LLMs specifically.
Verdict: AI Assist profile. Augmentation is already underway at the senior level. Junior processing roles face gradual attrition, not sudden displacement.
Junior Software Developer
This is where the Anthropic data is most alarming, and most credibly so. Computer programmers show 75% observed AI task coverage. Not theoretical. Observed. The hiring data for 22-to-25-year-olds in tech-adjacent roles shows a measurable deceleration. Goldman Sachs reports that unemployment among young workers in tech-exposed occupations has risen nearly 3 percentage points since the start of 2025.6 The “learn to code” career safety narrative of the 2010s has collapsed: CS graduates now face 6.1% unemployment versus 3.2% for philosophy majors.7
The role is not disappearing entirely. Senior engineers who can architect systems, define requirements, review AI-generated code, and make high-stakes technical decisions remain valuable. What is disappearing is the junior entry path that used to be how those senior engineers were created. That is a pipeline problem with consequences that will take years to manifest fully.
Verdict: Entry-level genuinely at risk. Restructuring is underway now.
The Non-Litigation Lawyer
The distinction between litigation and non-litigation legal work is the most important line to draw in any AI displacement analysis of the legal profession. Litigators, those who argue in court, conduct cross-examinations, and manage adversarial oral proceedings, operate in an institutional context that remains structurally human: judges require human counsel, and the strategic dimension of courtroom performance is not a task that maps onto current AI capabilities in any near-term sense.
The picture for non-contentious legal work is fundamentally different, and the arrival of agentic AI changes the calculus entirely. Until recently, one could argue that document drafting, legal research, and due diligence were tasks AI assisted with rather than replaced. That argument is no longer credible. Agentic legal AI systems are now capable of running a full M&A due diligence workflow autonomously: ingesting data rooms, flagging material risks, drafting summaries, cross-referencing against regulatory databases, and producing a structured legal opinion, without a human reviewing each step. What used to require a team of junior and mid-level associates working for weeks can be completed overnight at a fraction of the cost.
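The workflow described above can be pictured as an orchestrated pipeline of steps. The sketch below is schematic, with no specific vendor, product, or API implied; in a real agentic system each step would be a model call with access to the data room and external databases rather than the stubs used here.

```python
# Schematic of the kind of agentic due-diligence pipeline described above.
# Step names and orchestration are illustrative; each step is a stub.

def ingest_data_room(state: dict) -> dict:
    state["indexed_docs"] = list(state["documents"])
    return state

def flag_material_risks(state: dict) -> dict:
    state["risks"] = [f"material-risk scan of {doc}" for doc in state["indexed_docs"]]
    return state

def draft_summaries(state: dict) -> dict:
    state["summaries"] = [f"summary of {doc}" for doc in state["indexed_docs"]]
    return state

def cross_reference_rules(state: dict) -> dict:
    state["rule_checks"] = ["cross-referenced against regulatory database (stub)"]
    return state

def produce_legal_opinion(state: dict) -> dict:
    state["opinion"] = "structured draft opinion (placeholder)"
    return state

PIPELINE = [ingest_data_room, flag_material_risks, draft_summaries,
            cross_reference_rules, produce_legal_opinion]

def run(documents: list[str]) -> dict:
    """Runs end-to-end without a human reviewing each intermediate step."""
    state: dict = {"documents": documents}
    for step in PIPELINE:
        state = step(state)
    return state

print(run(["SPA.pdf", "disclosure_letter.pdf"])["opinion"])
```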
The attorney-client privilege argument provided temporary cover. The reasoning was that routing client work product through a third-party AI model could constitute a waiver of privilege. That constraint is eroding fast. Enterprise-grade on-premise deployments and air-gapped legal AI models are now available from several providers, removing the data residency objection. The institutional barrier is gone before the legal framework has had time to respond.
What remains protected is narrow: the senior partner with a 20-year client relationship, the M&A counsel who gets called before a deal is even structured, the lawyer whose value is judgment under ambiguity and not document production. Below that level, the role as it has existed for the past 30 years is under genuine structural threat. Law firms are already responding: several Magic Circle and AmLaw 100 firms have publicly reduced associate intake since 2024, and the internal rationale is not cost-cutting in the traditional sense. It is that the associate layer is being absorbed by tooling.
Verdict: High displacement risk for non-contentious legal work. Agentic AI has already closed the capability gap. The remaining protections are institutional, not technical, and they are narrowing.
The Macro Picture: Destruction and Creation Running in Parallel
The WEF Future of Jobs Report 2025, drawing from over 1,000 employers across 55 economies representing 14 million workers, projects 92 million jobs displaced and 170 million created by 2030, a net gain of 78 million roles.3 The number that matters more than either figure is the 22% churn rate: roughly one in five jobs will be fundamentally transformed within five years. The economy as a whole does not shrink, but the individuals in the wrong roles at the wrong moment do not automatically benefit from the net gain.
The IMF estimates that roughly 40% of jobs worldwide are exposed to AI, though it distinguishes carefully between automatable tasks, augmentable tasks, and unaffected work.8 The PwC view adds a useful time-phasing dimension: in the first wave, women are at greater relative risk due to concentration in administrative functions; in later waves, male-dominated physical and transportation roles become more exposed as robotics and autonomous systems mature.9
The Anthropic paper names it explicitly: a “Great Recession for white-collar workers.” During 2007–2009, U.S. unemployment doubled from 5% to 10%. A comparable doubling in the top quartile of AI-exposed occupations, from 3% to 6%, would be detectable in their framework and would represent a significant economic dislocation concentrated entirely in the professional class. It has not happened yet. The framework exists precisely to detect it if it begins.1
Goldman Sachs offers the most conservative near-term estimate: if current AI use cases were extended proportionally across the economy, roughly 2.5% of U.S. employment would be at risk of displacement today.6 That figure will rise as capabilities expand and adoption deepens. It is a useful corrective to studies that treat maximum theoretical exposure as current risk.
What This Means in Practice
The WEF estimates that 39% of existing skill sets will become outdated between 2025 and 2030, and that 85% of employers plan to prioritize internal upskilling to address the transition.3 These are institutional commitments, but the practical burden falls on individuals. The skills that compound rather than compete with AI, such as systems thinking, cross-functional judgment, regulatory fluency, stakeholder management, and the ability to define and evaluate rather than merely execute, are precisely those that task-automation models underweight.
The honest read of the Anthropic study is this: the tsunami is real, the water is receding from shore, and the time to move to higher ground is now. But you have more time than the most panicked headlines suggest, and the geography of risk is more uneven than any single ranking can capture. Your theoretical task-exposure score matters. Your institutional context, your data environment, your accountability structure, and the irreplaceability of your human relationships matter equally.
The Anthropic index will be updated regularly. It is worth watching. It is the first instrument credible enough to tell us when the gap between theoretical and observed exposure starts closing at speed.
Sources
1. Massenkoff, M. & McCrory, P. — Labor Market Impacts of AI: A New Measure and Early Evidence. Anthropic Research, March 5, 2026. anthropic.com/research
2. Frey, C.B. & Osborne, M.A. — The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford University, 2013.
3. World Economic Forum — Future of Jobs Report 2025. January 2025. weforum.org
4. Site Selection Group — U.S. customer service employment data, 2022–2024.
5. Tomlinson, K. et al. — Working with AI: Measuring the Occupational Implications of Generative AI. Microsoft Research, arXiv, July 2025.
6. Briggs, J. & Dong, S. — How Will AI Affect the Global Workforce? Goldman Sachs Research, August 2025.
7. Prestianni, T. — 59 AI Job Statistics: Future of U.S. Jobs. National University, May 2025. nu.edu
8. International Monetary Fund — Gen-AI: Artificial Intelligence and the Future of Work. Staff Discussion Note, 2024.
9. PwC — Will robots really steal our jobs? UK Economic Outlook, 2018 (phasing model referenced in 2025 updates).
10. Challenger, Gray & Christmas — U.S. Layoff Report, 2025. As reported by Reuters and AP.