What 2 million LLM sessions reveal about AI discovery

The Strategic Imperative of Specialized AI Discovery

The rapid adoption of Large Language Models (LLMs) has fundamentally reshaped the way users seek, consume, and interact with information. For years, the prevailing assumption in the digital publishing and SEO community was simple: AI discovery would consolidate around the largest, most visible platform—ChatGPT—and that usage patterns would be relatively uniform across all sectors.

However, an extensive analysis conducted over the full calendar year of 2025, encompassing nearly two million LLM sessions across nine distinct industries, proves that this simple assumption is deeply flawed. The data reveals a far more complex and strategically nuanced landscape.

While ChatGPT retains a dominant share of trackable AI discovery traffic at 84.1%, its role is increasingly defined as the *default* tool for broad-market discovery. The real strategic shift is that brands can no longer rely on a single-platform optimization approach. Success in the current digital environment demands a precise, multi-platform strategy that is carefully aligned with how users achieve productivity within their specific professional contexts.

The critical insight for modern SEO and content strategy is distinguishing which LLM platforms facilitate essential user productivity and task execution, and which merely support early, general-purpose research. Different LLMs are not just competing; they are winning decisively in different industries, forcing digital marketers to move beyond generic LLM optimization and embrace specialized visibility strategies for 2026 and beyond.

Analyzing the Growth Divergence: From General Search to Specialized Function

From January through December 2025, the major LLM platforms demonstrated remarkably divergent growth trajectories, illustrating a market rapidly segmenting by function and utility. While the aggregate numbers show significant overall adoption, the speed at which competitors gained ground against the market leader is startling.

The year-over-year growth figures highlight this fragmentation:

* **ChatGPT:** Experienced a respectable 3x growth.
* **Copilot:** Saw an explosive 25x growth rate.
* **Claude:** Grew rapidly, achieving 13x growth.
* **Perplexity:** Showed 1.15x growth (effectively flat in overall volume).
* **Gemini:** Also reported 1x growth (effectively flat in overall volume).

Crucially, Copilot grew at more than eight times the rate of ChatGPT, and Claude at more than four times. This dramatic divergence signals that users are migrating away from the general-purpose LLM environment into tools that provide direct, measurable value within existing workflows or specialized professional domains.

The stagnant growth of Perplexity and Gemini, in this context, is not necessarily a sign of failure but a confirmation that their usage has been reinforced within tightly defined, specific knowledge workflows—a trend mirrored by the strategic priorities of their respective leadership. Satya Nadella publicly highlighted Copilot reaching 100 million monthly users, a clear metric of broad enterprise adoption. Meanwhile, Anthropic’s Dario Amodei announced rapid revenue expansion, demonstrating Claude’s strong traction among developers and enterprise users willing to pay for advanced reasoning capabilities. Similarly, Perplexity’s Aravind Srinivas has strategically focused on vertical success, specifically noting encouragement regarding the interest in Perplexity Finance, even positioning it as a Bloomberg Terminal alternative for specialized audiences.

These executive statements underscore a shared understanding: sustainable growth for modern LLMs is achieved by providing targeted, undeniable user value, not merely by offering another chat interface.

Pattern 1: Copilot’s Unstoppable Rise in Enterprise Workflows

Copilot’s staggering 25x aggregate growth rate is perhaps the most significant finding of the analysis, indicating a massive shift in how professionals conduct AI-assisted discovery. This growth is deeply rooted in the platform’s seamless integration into the Microsoft ecosystem, which dictates the workflow for millions of B2B professionals globally.

Copilot wins where the work already happens. In verticals where enterprises rely heavily on Microsoft tools (such as Office 365, Teams, and Dynamics), LLM adoption acts as an accelerator for existing processes, embedding AI discovery directly into the moments of execution and decision-making.

Detailed Vertical Analysis of Copilot Dominance

The industry-specific data makes Copilot’s competitive advantage clear:

Software as a Service (SaaS)

* **ChatGPT:** 2x growth
* **Copilot:** 21x growth

Copilot adoption in the SaaS sector mirrors the functional needs of modern teams. Companies utilize LLMs to extract insights from proprietary customer data, analyze third-party performance metrics, and drive both efficiency and product innovation directly within Microsoft environments. For a product manager, asking Copilot to summarize customer feedback from Teams chat history is far more efficient than exporting data to an external LLM.

Education

* **ChatGPT:** 6x growth
* **Copilot:** 27x growth

Educational institutions and publishers benefit from Copilot’s strong foundation in knowledge sharing and research synthesis. LLM-assisted discovery becomes a natural extension of content creation and consumption as educators and students use the tool to cite, expand upon, and contextualize existing material within documents and presentations.

Finance

* **ChatGPT:** 4.2x growth
* **Copilot:** 23x growth

The finance sector aligns strongly with Copilot because many tasks—from generating reports to reconciling accounts—are context-dependent and heavily reliant on existing data models. Financial analysts need models that can source, reason across, and automate tasks using authoritative internal reports and external filings, all within trusted enterprise security environments.

Strategic Takeaway: Optimizing for Execution, Not Just Research

The key insight derived from Copilot’s success is that for B2B decision-makers, AI discovery is moving into the moment of task execution. Visibility is no longer primarily won during the initial, broad research phase. It is won during the *execution phase*, where user intent is highest and decisions are actively forming.

If your target audience operates heavily within enterprise workflows—SaaS teams, financial analysts, supply chain managers, or educators—your content strategy must prioritize making data and insights accessible and usable *inside* the Microsoft ecosystem. This requires focusing on structured data, detailed guides, and API documentation that can be easily referenced and synthesized by Copilot when professionals prompt it for answers within their working environments.
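As one hypothetical illustration of the structured data this implies, the sketch below builds a minimal schema.org `TechArticle` block as JSON-LD, the markup format that AI crawlers and answer engines can parse reliably. All field values here are invented placeholders, not recommendations from the study:

```python
import json

# Minimal schema.org TechArticle markup expressed as JSON-LD.
# Every value below is an illustrative placeholder.
article = {
    "@context": "https://schema.org",
    "@type": "TechArticle",
    "headline": "Integrating Your SaaS Platform with Microsoft Teams",
    "author": {"@type": "Organization", "name": "Example Corp"},
    "datePublished": "2025-06-01",
    "about": "Step-by-step integration guide with API reference links",
}

# On a web page, this string would be embedded inside a
# <script type="application/ld+json"> tag in the document head.
json_ld = json.dumps(article, indent=2)
print(json_ld)
```

The point of the sketch is the shape, not the values: explicit entity types and properties give an LLM something unambiguous to synthesize from when a professional prompts it inside a working environment.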

Pattern 2: Perplexity’s Hyper-Specialization in High-Stakes Finance

Perplexity’s overall 1.15x growth appears flat in the context of explosive competitor expansion, yet isolating the financial industry reveals a crucial lesson in niche dominance.

In the finance vertical, Perplexity maintains a significant 24% market share, making it the single exception where a secondary platform holds meaningful, sustained traffic against the dominant players.

In almost every other tracked category, Perplexity’s share has collapsed dramatically:

* **SaaS:** Down from 14.9% to 7.3%
* **E-commerce:** Down from 13.9% to 3.4%
* **Education:** Down from 28.5% to 5.2%
* **Publishers:** Down from 41.5% to 3.6%

The difference in finance is verification. Financial professionals evaluating complex investment platforms, researching regulatory compliance requirements, or comparing loan terms cannot accept a single synthesized answer without auditable proof. They require citations that trace directly back to definitive source documents.

Perplexity is architecturally built for this specific use case. Its value proposition is centered around transparency and verifiability. Through key partnerships with licensed data providers—such as Benzinga, FactSet, Morningstar, and Quartr—Perplexity provides direct pathways to essential institutional data, including SEC filings, earnings transcripts, and real-time market data.

Furthermore, products like Perplexity Enterprise Finance add necessary features such as custom answer engines, scheduled market updates, and live data visualizations. These features are indispensable for professionals who require institutional-grade, auditable information, prioritizing trust and accuracy over simple speed or convenience. Every answer generated in this environment includes explicit, clickable sources, enabling the user to immediately verify each claim against its origin.

Strategic Takeaway: Earning Relevance in Trusted Source Ecosystems

For brands targeting high-stakes, verification-driven industries, success in AI discovery hinges not just on content quality but on strategic visibility within the networks of trusted data that LLMs rely on. If your brand is not visible, cited, and validated within established, institutional data ecosystems and authoritative third-party references, your content will not surface, regardless of traditional SEO ranking strength.

Optimization now requires earning relevance across the full web of authoritative sources each model draws from, demanding a significant investment in data governance and institutional partnerships alongside traditional content marketing.

Pattern 3: Claude Dominates Standalone Strategic Analysis

Claude’s total share of AI discovery traffic sits at only 0.6%, a figure that, on its surface, makes it easy to dismiss. However, its immense growth within specific, high-influence professional verticals reveals its strategic importance. Claude is winning with professionals whose primary function is to research, write, and analyze deep, complex datasets, rather than with consumers conducting transactional searches.

Claude’s growth rates within these sectors are highly indicative of its specialized value:

* **Publishers:** 49x growth
* **Education:** 25x growth
* **Finance:** 38x growth
* **SaaS:** 10.3x growth

The key differentiation between Claude and platforms like Copilot lies in the type of work performed. Copilot focuses on efficiency *inside* operational tools (execution). Claude is the destination for *standalone strategic thinking* that requires deep synthesis and critique.

This capability is driven by Claude’s 200,000-token context window, allowing users to upload massive documents, entire codebases, or years of financial transcripts for comprehensive analysis.

Consider the following professional use cases, which highlight Claude’s value:

* A strategic developer uploads a full legacy codebase to Claude and asks for a data flow map and identification of architectural bottlenecks.
* A finance analyst uploads three years of earnings call transcripts and prompts Claude to analyze how management’s language around capital allocation has shifted over time.
* A content strategist uploads a long-form marketing whitepaper and asks Claude to critique its internal logical coherence across multiple sections.

The value proposition here is not simple workflow efficiency but the ability to partner with an advanced reasoning tool for work that demands strategic judgment, synthesis, and deep critique. The audience is smaller, but the influence wielded by these technical evaluators and strategic decision-makers is exceptionally high.

Strategic Takeaway: The Necessity of Analysis-Grade Content

If your target audience comprises developers, researchers, or strategic decision-makers, Claude optimization requires a shift to analysis-grade content. This means moving away from 500-word summaries and focusing on publishing deep, technical case studies, detailed implementation paths, and highly explicit methodology.

Content must be structured specifically for reasoning, utilizing clear frameworks, comparative analysis, and high informational density. A developer who uses Claude to deeply analyze your detailed API documentation or a whitepaper becomes a powerful internal champion, demonstrating that Claude’s small traffic share belies its massive impact on buying committee decisions.

Pattern 4: The Gemini Measurement and Attribution Crisis

The tracked traffic data for Google’s Gemini platform presents a confusing and likely misleading picture of user behavior:

* **Education:** −67% tracked traffic (steep decline)
* **SaaS:** +1.4x growth (modest growth)
* **Finance:** +1.3x growth (modest growth)
* **E-commerce:** +2.7x growth (strongest growth)

It is highly improbable that Gemini users are abandoning AI discovery when competing platforms like ChatGPT are growing 3x and Copilot 25x. What is far more likely is an *attribution collapse*.

Over the analyzed period, Gemini has increasingly synthesized and delivered AI-generated answers while keeping users firmly within the Google ecosystem and often without providing prominent, immediately clickable source links. Users research, absorb the synthesized answer, and either convert directly or search for the brand name days later. This complex, multi-touch journey fails to register as “AI discovery” in traditional analytics.

Google controls the largest search distribution network globally, and Gemini is now deeply embedded within it. The research confirms a significant strategic risk: the common industry metric of “0.13% AI penetration” is almost certainly grossly understated. If even 30% to 40% of Gemini-assisted discovery is going untracked due to internal synthesis and lack of attribution links, the true volume of AI-driven research could easily be two to three times higher than measurable data suggests.

Unlike Perplexity, which explicitly surfaces sources, or Copilot, which operates within traceable enterprise software workflows, Gemini synthesizes answers and obscures the precise point of influence. A user asks Gemini about optimal project management software, receives a complete, synthesized answer, and then searches for the recommended brand name days later. Analytics register this as branded organic search, completely masking the initial, critical AI influence.

Strategic Takeaway: Adapting to Attribution Gaps

The Gemini data is a stark warning that last-click attribution is breaking down across the board. AI-assisted conversions—where users research in one system, synthesize information in another, and convert through branded or direct search—are quickly becoming the default path to purchase.

To adapt to this reality, content strategists must implement new measurement models:

1. **Monitor Branded Search Lift:** Measure increases in branded search volume immediately following or concurrent with concentrated AI optimization efforts.
2. **Invest in Brand Recall:** Since the source link is often missing, strong brand recognition and trust are crucial. Your brand name must be memorable enough to be searched days after the initial LLM exposure.
3. **Track Time-Lagged Conversions:** Build sophisticated models that account for multi-session, cross-platform journeys where research and conversion are separated by significant time lags.
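A rough sketch of the first of these models: compare average daily branded search volume before and after an optimization push begins. The data, dates, and volumes below are entirely invented for illustration; in practice the series would come from a search analytics export.

```python
from datetime import date

# Hypothetical daily branded-search volumes: (date, query count).
daily_branded_volume = [
    (date(2025, 5, 1), 410), (date(2025, 5, 2), 395),
    (date(2025, 5, 3), 430), (date(2025, 5, 4), 405),
    # AI-optimization push launches on May 5 in this invented scenario.
    (date(2025, 5, 5), 520), (date(2025, 5, 6), 560),
    (date(2025, 5, 7), 540), (date(2025, 5, 8), 575),
]

CAMPAIGN_START = date(2025, 5, 5)

def branded_search_lift(volumes, campaign_start):
    """Relative lift in mean daily branded volume after the campaign start."""
    before = [v for d, v in volumes if d < campaign_start]
    after = [v for d, v in volumes if d >= campaign_start]
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    return (mean_after - mean_before) / mean_before

lift = branded_search_lift(daily_branded_volume, CAMPAIGN_START)
print(f"Branded search lift: {lift:.1%}")  # ~33.8% in this toy dataset
```

A real model would also control for seasonality and baseline trend before attributing the lift to AI exposure, but even this crude before/after comparison surfaces the signal that last-click attribution misses.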

Flat or declining Gemini traffic should not be viewed as user absence, but as a critical signal of a widening measurement failure in the age of invisible AI influence.

Crafting Your Future-Proof LLM Strategy Based on Audience Intent

The analysis of nearly two million LLM sessions confirms that AI discovery is not consolidating; it is fragmenting by industry, use case, and specific user intent. A single-minded focus on the dominant player, ChatGPT, is now a failing strategy.

To succeed in digital publishing and SEO, brands must align their optimization efforts with the four dominant LLM consumption patterns:

1. If Your Audience Operates in Enterprise Workflows (B2B SaaS, Finance, Education)

Focus: Copilot. Discovery occurs inside Microsoft tools (Teams, Excel, Outlook). Optimization must shift from ranking content to ensuring technical documentation, structured data, and authoritative insights are accessible for in-workflow synthesis. You are aiming for visibility at the moment decisions are finalized, not just when research begins.

2. If Your Audience Makes High-Stakes, Verification-Dependent Decisions (Finance, Legal)

Focus: Perplexity. This audience demands verifiable citations and institutional data. Optimization means earning relevance within the trusted networks that Perplexity partners with (e.g., FactSet, Morningstar). Content must be auditable, high-authority, and explicitly sourced.

3. If Your Audience Includes Technical Evaluators and Strategic Analysts (Developers, Researchers)

Focus: Claude. While small in volume, this audience drives high-value adoption based on deep analysis capabilities. Your content strategy must prioritize long-form, analysis-grade research, detailed implementation guides, and structured methodologies designed for complex reasoning.

4. If Measurement Is Breaking Down or You Target Broad Consumer Markets

Focus: Gemini and Brand Recall. Acknowledge that last-click attribution is increasingly unreliable. Focus on elevating brand strength and recall to ensure that users exposed to synthesized answers return via branded search. Invest in robust, time-lagged conversion tracking and branded lift monitoring.

5. If You Are in an Emerging Category (LegalTech, Specialized Events, Insurance)

Start with **ChatGPT** for broad reach, leveraging its default status. However, actively watch for platform migration toward specialized tools like Copilot (for enterprise legal work) or Perplexity (for regulatory compliance) as your audience matures and their specific productivity needs evolve.

The future of AI optimization is not about conquering one single interface; it is about granularity: understanding where your audience is productive and ensuring your content is the trusted source cited within that specialized environment.

***

For a complete breakdown of the data, industry growth maps, and sector-specific strategies, download the full study:

**2025 State of AI Discovery Report: What 1.96 Million LLM Sessions Tell Us About the Future of Search**
