
Last December, Anthropic asked its users a simple question: what do you actually want from AI? The method was novel - an AI-powered interviewer conducted open-ended conversations with 80,508 Claude users across 159 countries and 70 languages over one week.[1] The result is what Anthropic describes as the largest and most multilingual qualitative study ever conducted. The findings, released this week, are both clarifying and unsettling.
The single largest category of desire - cited by 18.8% of respondents - was professional excellence: the wish to offload routine cognitive work so they could focus on higher-order problems.[1] A healthcare worker in the United States described receiving 100 to 150 text messages per day from doctors and nurses before AI: "Since implementing AI, the pressure of documentation has been lifted. I have more patience with nurses, more time to explain things to family members."
The next two clusters - personal transformation (13.7%) and life management (13.5%) - are revealing in what they suggest about the gap between how AI is often marketed and what users actually seek. People are not primarily asking for better benchmarks or faster tokens. They want more time with their children, a clearer head, less mental load. A fourth cluster, time freedom (11.1%), captures this most directly: "With AI support I can now leave work on time to pick up my kids from school," wrote a software engineer in Mexico.[1]
Entrepreneurship (8.7%) and financial independence (9.7%) appear prominently, and their texture is different from Silicon Valley's AI optimism. Some of the most vivid voices in these categories come from outside wealthy economies. "I'm in a tech-disadvantaged country, and I can't afford many failures," wrote an entrepreneur in Cameroon. "With AI, I've reached professional level in cybersecurity, UX design, marketing, and project management simultaneously... It's an equalizer."[1]
When asked whether AI had already taken a step toward their vision, 81% said yes.[1] That is a strikingly high satisfaction figure for a technology that critics routinely dismiss as unreliable. And yet the second-largest delivery category, at 18.9%, was "AI hasn't delivered" - respondents for whom the gap between aspiration and reality remains wide. "AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry," wrote one respondent from Germany. "Right now it's exactly the other way around."
The study surfaces a deeper paradox: what people love most about AI and what they most fear tend to occupy the same conceptual space. A lawyer in Israel who uses AI to review contracts and save time worries simultaneously that she is "losing my ability to read by myself. Thinking was the last frontier." A technical support specialist in the United States was simply displaced: "I got laid off from my job in May because my company wanted to replace me with an AI system." Hope and fear do not divide respondents into camps - they coexist as tensions within individuals.
The study's methodology is itself worth examining. Anthropic used its own "Anthropic Interviewer" - a version of Claude prompted to conduct structured conversations - to collect responses, and then used Claude-powered classifiers to categorize what it heard.[1] The company acknowledges the circularity: an AI company using its own AI to ask its own users what they think of AI, then using that same AI to analyze the answers. Participants were Claude.ai account holders, skewing toward users who have already chosen to engage with AI, which likely tilts sentiment in a positive direction.
Still, the scale and geographic breadth are unprecedented for qualitative research of this kind. 159 countries. 70 languages. The range of voices - from a software engineer in South Korea worried that "humanity has never dealt with something smarter than itself" to an entrepreneur in Honduras describing AI as "a shadow of me, just a very, very long one" - represents something genuinely new: a global snapshot of how AI feels to the people actually living with it, not just the people building it or regulating it.
Anthropic's interest in publishing this data is not purely altruistic - demonstrating that Claude is broadly beneficial is central to the company's brand and regulatory positioning. But the findings stand on their own. The clearest signal in the data may be the simplest: people do not primarily want AI to be smarter. They want it to make their lives more livable. That is a different design brief than the one most labs appear to be executing against.