
Omniscient

AI intelligence briefings, analysis, and commentary — delivered in broadsheet form.

By Noah Ogbi



Omniscient Media — made by ForeverBuilt, LLC.
© 2026 ForeverBuilt, LLC. All rights reserved.


AI Research

Vol. 1 · Saturday, March 21, 2026

What 80,000 People Actually Want From AI


Noah Ogbi




Last December, Anthropic asked its users a simple question: what do you actually want from AI? The method was novel - an AI-powered interviewer conducted open-ended conversations with 80,508 Claude users across 159 countries and 70 languages over one week.[1] The result is what Anthropic describes as the largest and most multilingual qualitative study ever conducted. The findings, released this week, are both clarifying and unsettling.

What People Hope For

The single largest category of desire - cited by 18.8% of respondents - was professional excellence: the wish to offload routine cognitive work so they could focus on higher-order problems.[1] A healthcare worker in the United States described receiving 100 to 150 text messages per day from doctors and nurses before AI: "Since implementing AI, the pressure of documentation has been lifted. I have more patience with nurses, more time to explain things to family members."

The next two clusters - personal transformation (13.7%) and life management (13.5%) - are revealing in what they suggest about the gap between how AI is often marketed and what users actually seek. People are not primarily asking for better benchmarks or faster tokens. They want more time with their children, a clearer head, less mental load. A fourth cluster, time freedom (11.1%), captures this most directly: "With AI support I can now leave work on time to pick up my kids from school," wrote a software engineer in Mexico.[1]

Entrepreneurship (8.7%) and financial independence (9.7%) appear prominently, and their texture is different from Silicon Valley's AI-optimism. Some of the most vivid voices in these categories come from outside wealthy economies. "I'm in a tech-disadvantaged country, and I can't afford many failures," wrote an entrepreneur in Cameroon. "With AI, I've reached professional level in cybersecurity, UX design, marketing, and project management simultaneously... It's an equalizer."[1]

The Paradox of Delivery

When asked whether AI had already taken a step toward their vision, 81% said yes.[1] That is a strikingly high satisfaction figure for a technology that critics routinely dismiss as unreliable. And yet the second-largest delivery category, at 18.9%, was "AI hasn't delivered" - respondents for whom the gap between aspiration and reality remains wide. "AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry," wrote one respondent from Germany. "Right now it's exactly the other way around."

The study surfaces a deeper paradox: what people love most about AI and what they fear most often occupy the same conceptual space. A lawyer in Israel who uses AI to review contracts and save time worries simultaneously that she is "losing my ability to read by myself. Thinking was the last frontier." A technical support specialist in the United States was simply displaced: "I got laid off from my job in May because my company wanted to replace me with an AI system." Hope and fear do not divide respondents into camps - they coexist as tensions within individuals.

Methodology and Limits

The study's methodology is itself worth examining. Anthropic used its own "Anthropic Interviewer" - a version of Claude prompted to conduct structured conversations - to collect responses, and then used Claude-powered classifiers to categorize what it heard.[1] The company acknowledges the circularity: an AI company using its own AI to ask its own users what they think of AI, then using that same AI to analyze the answers. Participants were Claude.ai account holders, skewing toward users who have already chosen to engage with AI, which likely tilts sentiment in a positive direction.
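The two-stage pipeline described here — open-ended interviews, then automated categorization of the transcripts — can be sketched in miniature. The keyword heuristic below is purely illustrative (Anthropic used Claude-powered classifiers, not keyword matching); the category names and quoted responses follow the article, while `CATEGORY_KEYWORDS` and `classify` are hypothetical stand-ins.

```python
from collections import Counter

# Illustrative stand-in for an LLM-based classifier: map a free-text
# interview response to one of the article's aspiration categories
# via naive keyword matching. A real pipeline would prompt a model
# with the taxonomy instead of matching substrings.
CATEGORY_KEYWORDS = {
    "professional excellence": ["documentation", "routine", "focus"],
    "time freedom": ["leave work", "time", "kids"],
    "financial independence": ["income", "money", "equalizer"],
}

def classify(response: str) -> str:
    """Return the best-matching category, or 'other' if nothing matches."""
    text = response.lower()
    scores = {
        cat: sum(kw in text for kw in kws)
        for cat, kws in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

# Tally categories across a batch of responses, as the study's
# second stage does at scale.
responses = [
    "The pressure of documentation has been lifted.",
    "I can now leave work on time to pick up my kids.",
]
tally = Counter(classify(r) for r in responses)
```

The point of the sketch is structural: the interviewer and the classifier are separate stages, so the same circularity the article notes (one model both asking and grading) applies at both steps independently.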

Still, the scale and geographic breadth are unprecedented for qualitative research of this kind. 159 countries. 70 languages. The range of voices - from a software engineer in South Korea worried that "humanity has never dealt with something smarter than itself" to an entrepreneur in Honduras describing AI as "a shadow of me, just a very, very long one" - represents something genuinely new: a global snapshot of how AI feels to the people actually living with it, not just the people building it or regulating it.

Anthropic's interest in publishing this data is not purely altruistic - demonstrating that Claude is broadly beneficial is central to the company's brand and regulatory positioning. But the findings stand on their own. The clearest signal in the data may be the simplest: people do not primarily want AI to be smarter. They want it to make their lives more livable. That is a different design brief than the one most labs appear to be executing against.


Sources

  1. Anthropic: What 81,000 people want from AI