How people really use AI: The surprising truth from analysing billions of interactions
For the past year, we’ve been told that artificial intelligence is revolutionising productivity: helping us write emails, generate code, and summarise documents. But what if the reality of how people actually use AI is completely different from what we’ve been led to believe?
A data-driven study by OpenRouter has just pulled back the curtain on real-world AI usage by analysing over 100 trillion tokens: essentially billions upon billions of conversations and interactions with large language models like ChatGPT, Claude, and dozens of others. The findings challenge many assumptions about the AI revolution.
OpenRouter is a multi-model AI inference platform that routes requests across more than 300 models from over 60 providers, from OpenAI and Anthropic to open-source alternatives like DeepSeek and Meta’s LLaMA.
With over 50% of its usage originating outside the United States and millions of developers served globally, the platform offers a unique cross-section of how AI is actually deployed across different geographies, use cases, and user types.
Importantly, the study analysed metadata from billions of interactions without accessing the actual text of conversations, preserving user privacy while revealing behavioural patterns.

The roleplay revolution no one saw coming
Perhaps the most surprising discovery: more than half of all open-source AI model usage isn’t for productivity at all. It’s for roleplay and creative storytelling.
Yes, you read that right. While tech executives tout AI’s potential to transform business, users are spending the bulk of their time engaging in character-driven conversations, interactive fiction, and gaming scenarios.
Over 50% of open-source model interactions fall into this category, dwarfing even programming assistance.

“This counters an assumption that LLMs are mostly used for writing code, emails, or summaries,” the report states. “In reality, many users engage with these models for companionship or exploration.”
This isn’t just casual chatting. The data shows users treat AI models as structured roleplaying engines, with 60% of roleplay tokens falling under specific gaming scenarios and creative writing contexts. It’s a massive, largely invisible use case that’s reshaping how AI companies think about their products.
Programming’s meteoric rise
While roleplay dominates open-source usage, programming has become the fastest-growing category across all AI models. At the start of 2025, coding-related queries accounted for just 11% of total AI usage. By the end of the year, that figure had exploded to over 50%.
This growth reflects AI’s deepening integration into software development. Average prompt lengths for programming tasks have grown fourfold, from around 1,500 tokens to over 6,000, with some code-related requests exceeding 20,000 tokens, roughly equivalent to feeding an entire codebase into an AI model for analysis.
For context, programming queries now generate some of the longest and most complex interactions in the entire AI ecosystem. Developers aren’t just asking for simple code snippets anymore; they’re conducting sophisticated debugging sessions, architectural reviews, and multi-step problem solving.
Anthropic’s Claude models dominate this space, capturing over 60% of programming-related usage for most of 2025, though competition is intensifying as Google, OpenAI, and open-source alternatives gain ground.

The Chinese AI surge
Another major revelation: Chinese AI models now account for about 30% of global usage, up sharply from their 13% share at the start of 2025.
Models from DeepSeek, Qwen (Alibaba), and Moonshot AI have rapidly gained traction, with DeepSeek alone processing 14.37 trillion tokens during the study period. This represents a fundamental shift in the global AI landscape, where Western companies no longer hold unchallenged dominance.
Simplified Chinese is now the second-most common language for AI interactions globally at 5% of total usage, behind only English at 83%. Asia’s overall share of AI spending more than doubled from 13% to 31%, with Singapore emerging as the second-largest country by usage after the United States.

The rise of “Agentic” AI
The study introduces a concept that will define AI’s next phase: agentic inference. AI models are no longer just answering single questions; they’re executing multi-step tasks, calling external tools, and reasoning across extended conversations.
The share of AI interactions classified as “reasoning-optimised” jumped from nearly zero in early 2025 to over 50% by year’s end. This reflects a fundamental shift from AI as a text generator to AI as an autonomous agent capable of planning and execution.
“The median LLM request is no longer a simple question or isolated instruction,” the researchers explain. “Instead, it’s part of a structured, agent-like loop, invoking external tools, reasoning over state, and persisting across longer contexts.”
Think of it this way: instead of asking AI to “write a function”, you’re now asking it to “debug this codebase, identify the performance bottleneck, and implement a solution”, and it can actually do it.
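The agent-like loop the researchers describe can be sketched in a few lines. This is a minimal illustration, not OpenRouter’s or any vendor’s actual API: the model function, tool names, and state keys are all hypothetical stand-ins, with a mock model that decides at each step whether to call a tool or return a final answer.

```python
def mock_model(state):
    """Stand-in for an LLM call: picks the next step from accumulated state."""
    if "read_file" not in state:
        return {"action": "tool", "tool": "read_file", "arg": "app.py"}
    if "profile" not in state:
        return {"action": "tool", "tool": "profile", "arg": state["read_file"]}
    return {"action": "final", "answer": "Bottleneck: " + state["profile"]}

# Hypothetical external tools the agent may invoke.
TOOLS = {
    "read_file": lambda path: "<source of %s>" % path,
    "profile": lambda src: "slow_query() dominates runtime",
}

def agent_loop(model, tools, max_steps=10):
    state = {}  # persists across steps, so later model calls can reason over it
    for _ in range(max_steps):
        step = model(state)
        if step["action"] == "final":  # the model has finished reasoning
            return step["answer"]
        # Invoke the requested tool and store its result back into the state.
        state[step["tool"]] = tools[step["tool"]](step["arg"])
    raise RuntimeError("agent did not finish within max_steps")

print(agent_loop(mock_model, TOOLS))
```

The key difference from a single-shot query is the loop: each model call sees the results of previous tool invocations, which is what makes multi-step tasks like “find and fix the bottleneck” possible.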
The “Glass Slipper Effect”
One of the study’s most fascinating insights relates to user retention. Researchers identified what they call the Cinderella “glass slipper” effect: a phenomenon where AI models that are “first to solve” a critical problem create lasting user loyalty.
When a newly released model perfectly fits a previously unmet need, the metaphorical glass slipper, those early users stick around far longer than later adopters. For example, the June 2025 cohort of Google’s Gemini 2.5 Pro retained roughly 40% of users at month five, significantly higher than later cohorts.
This challenges conventional wisdom about AI competition. Being first matters, but specifically being first to solve a high-value problem creates a durable competitive advantage. Users embed these models into their workflows, making switching costly both technically and behaviourally.
Cost doesn’t matter (as much as you’d think)
Perhaps counterintuitively, the study shows that AI usage is relatively price-inelastic. A 10% decrease in price corresponds to only about a 0.5-0.7% increase in usage.
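To see how inelastic that is, we can run the study’s figures through the standard price-elasticity formula. This is back-of-the-envelope arithmetic based on the percentages above, not a calculation from the report itself:

```python
def elasticity(pct_usage_change, pct_price_change):
    """Price elasticity of demand: % change in quantity / % change in price."""
    return pct_usage_change / pct_price_change

# A 10% price cut (-10%) lifts usage by only 0.5-0.7%.
e_low = elasticity(0.5, -10.0)   # -0.05
e_high = elasticity(0.7, -10.0)  # -0.07
print(f"implied elasticity: {e_low:.2f} to {e_high:.2f}")

# Revenue effect of that 10% cut, taking 0.6% usage growth as the midpoint:
# new revenue = (price x 0.90) x (usage x 1.006), so a cut shrinks revenue.
revenue_change = (1 - 0.10) * (1 + 0.006) - 1
print(f"revenue change: {revenue_change:+.1%}")
```

An elasticity magnitude far below 1 means demand is inelastic: cutting prices barely moves usage, and (all else equal) mostly just reduces revenue, which helps explain why premium and budget models coexist.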
Premium models from Anthropic and OpenAI command $2-35 per million tokens while maintaining high usage, while budget options like DeepSeek and Google’s Gemini Flash achieve comparable scale at under $0.40 per million tokens. Both coexist successfully.
“The LLM market doesn’t seem to behave like a commodity just yet,” the report concludes. “Users balance cost with reasoning quality, reliability, and breadth of capability.”
In other words, AI hasn’t become a race to the bottom on pricing. Quality, reliability, and capability still command premiums, at least for now.
What this means going forward
The OpenRouter study paints a picture of real-world AI usage that’s far more nuanced than industry narratives suggest. Yes, AI is transforming programming and professional work. But it’s also creating entirely new categories of human-computer interaction through roleplay and creative applications.
The market is diversifying geographically, with China emerging as a major force. The technology is evolving from simple text generation to complex, multi-step reasoning. And user loyalty depends less on being first to market than on being first to genuinely solve a problem.
As the report notes, “ways in which people use LLMs don’t always align with expectations and vary significantly country by country, state by state, use case by use case.”
Understanding these real-world patterns, not just benchmark scores or marketing claims, will be essential as AI becomes further embedded in daily life. The gap between how we think AI is used and how it’s actually used is wider than most realise. This study helps close that gap.

