The Harvard Business School working paper "Displacement or Complementarity? The Labor Market Impact of Generative AI" is the most comprehensive empirical accounting of AI's effect on hiring to date. Authored by HBS Professor Suraj Srinivasan alongside Wilbur Xinyuan Chen of Hong Kong University of Science and Technology and Saleh Zakerinia of Ohio State University, it draws on a dataset covering nearly all U.S. job vacancies from 2019 through March 2025 - six years of posting data spanning the pre- and post-ChatGPT eras.[1]
The paper's central finding is that generative AI is doing two things simultaneously, at different ends of the occupational spectrum. For roles in the top quartile of automation potential - structured, repetitive cognitive work - job postings fell 13% per quarter per firm following the widespread adoption of generative AI tools. For roles in the top quartile of augmentation potential - work requiring human judgment, interpersonal skill, and domain expertise alongside AI - demand grew 20%. Finance and technology were the sectors registering the steepest declines.[2]
The distinction between automation-prone and augmentation-prone roles is the methodological core of the paper, and it deserves scrutiny. The research team used OpenAI's ChatGPT to classify over 19,000 job tasks across more than 900 occupations, assessing each task's theoretical susceptibility to generative AI. Roles were then assigned an augmentation score based on the ratio of AI-exposed tasks to tasks requiring irreplaceable human involvement.[1]
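The paper does not publish its exact scoring formula, but the mechanics described above can be sketched in miniature. Everything in the snippet below is a hypothetical illustration of the idea - the task labels, function name, and the simple ratio are assumptions, not the authors' actual methodology or data:

```python
# Hypothetical sketch of the augmentation-score idea: an LLM classifier
# labels each of an occupation's tasks, and the score is the ratio of
# AI-exposed tasks to tasks requiring human involvement. Labels and the
# formula are illustrative assumptions, not the paper's methodology.

def augmentation_score(task_labels):
    """Ratio of AI-exposed tasks to human-required tasks.

    task_labels: list of strings, each "ai_exposed" or "human_required"
    (the kind of label a classifier might assign per task).
    """
    exposed = sum(1 for t in task_labels if t == "ai_exposed")
    human = sum(1 for t in task_labels if t == "human_required")
    if human == 0:
        return float("inf")  # no human-centred tasks at all
    return exposed / human

# An occupation with 6 AI-exposed tasks and 4 human-centred ones:
print(augmentation_score(["ai_exposed"] * 6 + ["human_required"] * 4))  # 1.5
```

On a measure like this, a high score marks a role where AI touches much of the work but human tasks remain (augmentation-prone), while a role whose exposed tasks dominate with few human-required tasks left would score toward automation-prone.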
Microbiologists, financial analysts, and clinical neuropsychologists were identified as high-augmentation roles - occupations where AI can accelerate research or data processing, but where judgment, patient interaction, or creative synthesis remains central. The skill signal in job postings reinforced the pattern: automation-prone roles saw 7% fewer skills listed overall, while augmentation-prone roles saw a 15% increase in AI-related skill requirements - prompt engineering, AI tool proficiency, human-AI workflow design.[2]
The HBS findings are striking, but they exist in a contested research landscape. In March 2026 - following the HBS Working Knowledge summary of the Srinivasan research - Anthropic released its own labor market analysis introducing a new metric called "observed exposure," which weights actual usage data from Claude traffic rather than theoretical task categorizations alone.[3] The Anthropic paper found that computer programmers sit at the top of the exposure ranking with 75% task coverage, but concluded that there is currently "no systematic increase in unemployment for highly exposed workers since late 2022" - with the caveat that hiring of younger workers has slowed in some exposed occupations.
The Budget Lab at Yale, which tracks monthly employment data, reached a similarly measured verdict in its March 2026 update: occupational dissimilarity is changing faster than historical norms, but the shift predates the widespread introduction of generative AI and shows no meaningful acceleration since ChatGPT's launch. "While anxiety over the effects of AI on today's labor market is widespread," the Budget Lab concluded, "our data suggests it remains largely speculative."[4]
These are not contradictory findings so much as parallel measurements of different phenomena. The HBS paper tracks employer intent as revealed by job postings - a leading indicator of labor demand. The Anthropic and Yale analyses track realized employment and unemployment outcomes - lagging indicators of economic harm. The gap between them may simply be time.
One resolution to the apparent contradiction is structural: job postings can fall without unemployment rising, if the workers who would have filled those roles were never hired in the first place, or if natural attrition absorbs the reduction. A 13% decline in automation-prone postings is a signal that employer demand is contracting - but it need not produce mass layoffs to reshape careers. It produces fewer entry points.
The Anthropic paper's finding that hiring of younger workers has slowed in exposed occupations is the most pointed evidence of this dynamic.[3] The workers most harmed by a contraction in entry-level postings are those who have not yet entered those roles - recent graduates and early-career workers who will find the pipeline narrower than it was for their predecessors. That harm is real, diffuse, and difficult to capture in aggregate unemployment statistics.
Srinivasan's own read of his data is consistent with this: the paper's policy recommendation is not crisis management but preemption. "Retraining is essential for jobs where generative AI is reducing skill diversity," he wrote. "In automation-prone occupations, workers may face displacement unless they develop non-automatable skills."[2]
The occupational exposure data from Anthropic's analysis offers a useful map of proximity to risk. Computer programmers lead observed exposure at 75% task coverage, followed by customer service representatives, with data entry keyers at 67%.[3] These are not marginal roles: customer service and data entry alone account for millions of U.S. workers, concentrated among those without four-year degrees - a demographic not well represented in discussions of AI's impact on tech professionals.
The HBS data adds a sector-level dimension: the steepest declines are in finance and technology, sectors whose workers tend to be better-positioned to reskill than those in administrative services or entry-level operations. If the augmentation-versus-automation divide continues to widen, the workers most capable of navigating it may not be those most exposed to displacement - and vice versa.
The honest answer is that the data is early. The HBS paper covers only through March 2025; the most consequential AI productivity tools of 2025 and 2026 postdate its dataset. The Yale Budget Lab is explicit on this point: monthly monitoring is the right posture because "the effects of new technologies are evolving, and a simple snapshot in time is not enough to explicitly determine what the future holds."[4] The scorecard is being written in real time.
[1] Harvard Business School: "Displacement or Complementarity? The Labor Market Impact of Generative AI" - Working Paper No. 25-039, Srinivasan, Chen, Zakerinia, December 2024 (updated August 2025)
[2] HBS Working Knowledge: "Enhance or Eliminate? How AI Will Likely Change These Jobs" - summary of Srinivasan et al. findings with interactive data, February 20, 2026
[3] Anthropic: "Labor market impacts of AI: A new measure and early evidence" - introducing the observed exposure metric, March 5, 2026
[4] The Budget Lab at Yale: "Evaluating the Impact of AI on the Labor Market: January/February CPS Update" - March 19, 2026