Omniscient

AI intelligence briefings, analysis, and commentary — delivered in broadsheet form.


Omniscient Media — made by ForeverBuilt, LLC.
© 2026 ForeverBuilt, LLC. All rights reserved.


AI Research

Vol. 1 · Monday, March 23, 2026

Companies Are Spending the Most on AI Where It Works the Least


Noah Ogbi



The numbers are hard to square. Global AI spending is on track to hit $2.52 trillion in 2026, a 44% increase over last year, according to Gartner.[1] Meanwhile, a July 2025 report from MIT's NANDA project found that 95% of task-specific, integrated enterprise AI deployments deliver zero measurable P&L impact - this, against a backdrop of $30 to $40 billion already deployed into enterprise GenAI.[2] You do not need to be an economist to sense something has gone badly wrong with the allocation.

The MIT report offers a specific explanation for why. When researchers asked executives to distribute a hypothetical $100 across business functions, sales and marketing captured approximately 70% of GenAI budgets - a figure the report itself notes may be closer to 50% depending on the mix of companies surveyed.[2] Back-office automation - document processing, accounts payable, compliance monitoring, internal workflow orchestration - consistently outperformed every other category on actual ROI, yet it attracted a small fraction of total spend. The money is going where the cameras are pointed, not where the returns are.

Why does AI investment follow visibility, not value?

Back-office automation is genuinely unglamorous. It rebuilds margins quietly, through reduced BPO spend, faster contract processing, and fewer manual exceptions in finance workflows. These gains are real, but they do not make for compelling board presentations or LinkedIn announcements. Sales and marketing AI, by contrast, is loud: AI-generated outreach, smart lead scoring, personalized campaign content. Metrics appear on dashboards within days. The causal chain from tool to revenue looks neat, even if the underlying returns are thin.

This is the structural incentive problem the MIT data exposes. Executives fund what they can attribute, and attribution models overwhelmingly favor visible, top-line functions. The result is a kind of collective misallocation - every individual budget decision looks rational, but in aggregate they point enterprise AI investment toward its weakest applications.

What the hype cycle tells us about what comes next

Gartner has placed AI squarely in the "trough of disillusionment," with recovery still ahead.[1] The trough arrives when early pilots fail to scale, when proof-of-concept energy collides with operational reality. Gartner's framing implies the hangover is not yet finished.

The critical question is which departments absorb the most pain when disillusionment peaks. If 70% of GenAI budgets are flowing into sales and marketing, that is where the failed pilots are concentrated. Those are also the functions where executives are most exposed - where overpromised AI tools produce no visible lift despite visible spend. The trough, in other words, will not strike AI budgets evenly. It will hit hardest in exactly the departments that received the most investment.

The shadow AI economy complicates the picture

There is a revealing subplot buried in the MIT data. While official enterprise AI initiatives stalled, a parallel economy of personal AI use flourished. Workers from over 90% of surveyed companies reported regular use of personal AI tools - ChatGPT, Claude, and similar - for work tasks, even as only 40% of those same companies had purchased an official LLM subscription.[2] Employees were, in effect, solving the ROI problem themselves, routing around the corporate procurement cycle entirely.

This matters for understanding where AI value actually lives. The productivity gains showing up in individual workflows - faster drafting, quicker research, better first drafts of code - are real. They are just not appearing in P&L reports because they are not being captured by sanctioned tools. The enterprise AI problem is not purely a technology problem. It is a measurement and incentive problem dressed up as one.

The advertising industry is doubling down anyway

None of this appears to be slowing the marketing industry's appetite. The IAB's 2026 Outlook Study, based on responses from more than 200 brands and agency buyers, found that five of the six top areas of increased advertiser focus are directly tied to AI.[3] Two-thirds of buyers are now focused specifically on agentic AI for ad buying and campaign execution. U.S. ad spend overall is projected to grow 9.5% in 2026, with AI described by IAB as having moved from experimentation to core infrastructure.

This is precisely the dynamic the MIT data should be warning against. The marketing sector is accelerating its AI investment at the moment the evidence most clearly suggests it is the wrong place to concentrate it. Agentic AI for media buying may prove different from chatbots for outbound email - the technology is meaningfully more sophisticated. But the underlying incentive structure has not changed. Marketers still measure what is easy to measure, fund what produces visible outputs, and declare success in a language that boards understand.

What a reallocation would actually look like

The MIT report's finding that external vendor partnerships outperform internal builds by a factor of two is instructive here.[2] The organizations crossing what the report calls the "GenAI Divide" are not the ones with the largest AI budgets or the most ambitious internal programs - they are the ones that targeted specific, process-level problems in operational functions and demanded measurable business outcomes rather than software benchmarks from their vendors. Mid-market companies reached full implementation in an average of 90 days; enterprise-scale organizations took nine months or more.

The implication is not that enterprises should abandon AI in marketing. It is that the current allocation - roughly 70 cents of every AI dollar pointed at top-line functions, with a handful of cents going to the back office - is badly inverted relative to where measurable returns actually concentrate. Correcting that imbalance will require CFOs willing to fund unglamorous infrastructure and executives willing to present boring wins to their boards. That is a cultural problem, not a technical one. And culture changes more slowly than hype cycles.


Sources

  1. "Gartner: Global AI Spending to Reach $2.5 Trillion in 2026" - Computerworld
  2. "The GenAI Divide: State of AI in Business 2025" - MIT NANDA (PDF)
  3. "IAB 2026 Outlook Study Forecasts 9.5% Growth in U.S. Ad Spend" - PR Newswire