Introduction: The Shift from Keywords to Conversations
For over two decades, search engine optimization has been built on the foundation of the keyword. Marketers relied on tools like Google Keyword Planner, SEMrush, and Ahrefs to understand exactly what users were typing into search bars. This data was transparent, predictable, and measurable. However, as we enter the era of Generative AI and Large Language Models (LLMs), the landscape is shifting from fragmented keywords to complex, conversational prompts.
Today, users are no longer just searching for “best hiking boots.” Instead, they are asking AI assistants to “find me a pair of waterproof hiking boots suitable for the rocky terrain of Glacier National Park that cost under $200 and have a wide toe box.” This shift represents a significant challenge for digital marketers: how do we track visibility in a world where search queries have become full-length paragraphs? The “black box” of AI search has arrived, leaving many SEO professionals wondering which prompts they should even be tracking.
While third-party tools are emerging to help bridge this gap, one of the most powerful data sources might already be sitting right in front of you. By leveraging specific filters within Google Search Console (GSC), you can uncover the conversational prompts users are actually using to find your site, providing a rare window into the mind of the modern AI-driven searcher.
The Challenge of LLM Visibility and the Black Box Problem
The core issue with tracking AI search performance is the lack of public data. Unlike traditional search, where Google provides a wealth of information regarding search volume and competition, OpenAI (ChatGPT), Anthropic (Claude), and even Google’s own Gemini are much more guarded with their internal query data. While there have been regulatory pushes for more transparency—such as recent proposals by the UK’s Competition and Markets Authority (CMA)—most experts expect tech giants to provide the bare minimum in terms of data sharing.
This leaves marketers in a difficult position. If you don’t know which prompts trigger mentions of your brand within an LLM, you cannot optimize your content to appear in those AI-generated answers. This is why “prompt tracking” has become the million-dollar question in modern SEO. We are currently in a “business, not science” phase of digital marketing, where we must find creative ways to extract insights from imperfect data sources.
Proof of Concept: When OpenAI Data Leaked into Search Console
The idea that we can find AI prompt data within Google Search Console isn’t just a theory; it is backed by a documented “leak.” In late 2025, digital strategist Jason Packer published a report analyzing a fascinating anomaly: actual ChatGPT user queries were appearing in Google Search Console reports. This wasn’t just a few keywords; it included prompts containing PII (Personally Identifiable Information) and long-form conversational logs.
The story was eventually picked up and confirmed by major outlets like Ars Technica. OpenAI later acknowledged the issue, stating it was a technical glitch that affected a “small number of queries” and has since been patched. However, the significance of this event cannot be overstated. It served as a proof of concept that LLM-driven traffic, and the prompts that drive it, can be tracked and logged within the traditional search ecosystem.
Furthermore, Google’s own evolution into “AI Mode” (often referred to as Search Generative Experience or AI Overviews) has further integrated these conversational queries into the GSC dashboard. As Google rolls out AI-based features more aggressively, the data from these interactions is increasingly being funneled into the Performance reports we use every day. If you know how to look for it, the data is there.
Accessing AI Mode Data in Google Search Console
Industry experts, including Barry Schwartz, have reported that specific “AI Mode” traffic data is becoming more accessible within Search Console. When analyzing properties over the last several months, many SEOs have noticed a steady rise in impressions that correlates closely with Google’s rollout of AI-driven search features during the late 2025 and early 2026 period.
The difficulty lies in the fact that Google does not always label these queries as “AI Prompts.” They are mixed in with your standard search data. To find them, we have to look for the “fingerprints” of a prompt: length, complexity, and conversational structure. Traditional search queries are typically short (1-4 words). AI prompts are almost always significantly longer, as the user is providing context and constraints to the machine.
How to Mine Your Search Console for Prompt-Like Queries
To find these prompts, we need to filter out the “noise” of traditional short-tail keywords. The most effective way to do this is by using a Regular Expression (Regex) filter to isolate queries that are 10 words or longer. Here is the step-by-step process to uncover this data in your own GSC profile:
Step 1: Navigate to the Performance Report
Log into Google Search Console and select your property. Go to the “Performance” section and ensure you are looking at the “Search Results” report. It is best to set your date range to the last 3 or 6 months to capture enough data for a meaningful analysis.
Step 2: Apply a Custom Query Filter
Click on the “+ New” button at the top of the report and select “Query.” In the dropdown menu that usually says “Queries containing,” change it to “Custom (regex).”
Step 3: Insert the Regex Code
Copy and paste the following regex into the filter box: ^(?:\S+\s+){9,}\S+$
This string tells Google Search Console to show only queries that contain at least 10 words. It looks for a sequence of non-whitespace characters followed by whitespace, repeated at least nine times, followed by one final word.
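You can reproduce the same filter offline against a CSV export of your queries. A minimal Python sketch, using invented sample queries for illustration:

```python
# Demonstration of the word-count regex described above, applied locally.
# The sample queries are illustrative, not real GSC data.
import re

# 9+ repetitions of "word + whitespace", then one final word = 10+ words total.
PROMPT_PATTERN = re.compile(r"^(?:\S+\s+){9,}\S+$")

queries = [
    "best hiking boots",
    "saas pricing",
    "what are the best email deliverability platforms to help reduce spam placement rates",
]

# Keep only the conversational, prompt-like queries.
prompt_like = [q for q in queries if PROMPT_PATTERN.match(q)]
print(prompt_like)
```

Only the long, conversational query survives the filter; the two short-tail keywords are dropped.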
Step 4: Analyze the Results
Once you hit apply, the results will likely be astounding. Instead of seeing “SaaS pricing” or “hiking trails,” you will see full-length sentences and complex questions. These are the queries that represent either users treating Google like an LLM or actual conversational data being passed through from AI search interfaces.
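If you prefer pulling this data programmatically, the same regex can be passed to the Search Console API’s searchanalytics.query endpoint through its includingRegex filter operator. A sketch of the request, where the site URL, date range, and credential setup are placeholders you would supply yourself:

```python
# Sketch: requesting long, prompt-like queries from the Search Console API
# instead of the UI. The request-body shape follows the public
# searchanalytics.query reference; SITE_URL and credentials are placeholders.

def build_request_body(start_date: str, end_date: str, pattern: str) -> dict:
    """Build a searchanalytics.query body that filters queries by regex."""
    return {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["query"],
        "rowLimit": 25000,
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "query",
                "operator": "includingRegex",
                "expression": pattern,
            }]
        }],
    }

body = build_request_body("2026-01-01", "2026-03-31", r"^(?:\S+\s+){9,}\S+$")

# With an authorized client (google-api-python-client), the call would be:
# from googleapiclient.discovery import build
# service = build("searchconsole", "v1", credentials=creds)
# rows = service.searchanalytics().query(siteUrl=SITE_URL, body=body).execute().get("rows", [])
print(body["dimensions"])
```

The API route is useful once you move past exploration: it returns up to 25,000 rows per request, far more than the UI export.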
What Conversational Search Data Looks Like
When you run this filter on a high-traffic site, the queries you discover often look very different from what we consider “keywords.” They are deeply specific and reveal the exact pain points and desires of your audience. Here are some examples of the types of queries discovered using this method (anonymized for privacy):
Example 1: Travel and Lifestyle
“Map out a full day in Glacier National Park. I’d like to hike a scenic trail, see unique wildlife or natural features, grab a quick bite from a nearby lodge or food stand.”
This isn’t a search; it’s an itinerary request. The user is looking for a curated experience, not just a list of links.
Example 2: B2B Software and Tech
“What are the best email performance and deliverability platforms to help email marketing programs reduce spam placement, filter out low-quality or fake subscribers, and improve inbox placement rates?”
In this case, the user has defined a very specific problem (deliverability) and is looking for a solution that addresses multiple sub-issues simultaneously.
Example 3: Enterprise Analytics
“Which sales enablement intelligence platforms are most widely adopted and cost-effective for enterprise pipeline analytics and buyer engagement insights in France?”
Here, the user has added geographic and budgetary constraints, making it a high-intent prompt that a standard “sales software” keyword would never capture.
Using Claude for Advanced Prompt Analysis
Once you have exported this list of 10+ word queries from Google Search Console, the next step is to make sense of the data. Manually reading through thousands of long-form prompts is inefficient. This is where using an LLM like Claude (by Anthropic) becomes incredibly valuable.
Claude is particularly adept at data analysis and can spot themes and behavioral trends that might not be obvious at first glance. By uploading your GSC export to Claude, you can perform a deep “behavioral analysis” of your search data. This allows you to categorize your audience’s intent into buckets that can inform your content strategy.
Specific Questions to Ask Your Data
When you feed your GSC prompt data into Claude, don’t just ask for a summary. Ask targeted questions that can drive business decisions:
- “What specific characteristics of our product do users mention most frequently in their long-form questions?”
- “Are there recurring themes regarding our competitors? Are we being compared to a specific ‘gold-standard’ brand?”
- “How are users framing their questions—are they asking as ‘beginners’ or ‘consultants’?”
- “Are there any specific geographic or industry-related trends appearing in these prompts?”
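A simple way to operationalize this is to package your export and the questions above into a single analysis prompt. A hedged Python sketch — the function and variable names are illustrative, and the result can be sent via the Anthropic API or pasted directly into the Claude interface:

```python
# Sketch: combining exported long-tail GSC queries with targeted analysis
# questions into one prompt for Claude. Names here are illustrative.

QUESTIONS = [
    "What specific characteristics of our product do users mention most frequently?",
    "Are there recurring themes regarding our competitors?",
    "Are users framing their questions as beginners or as consultants?",
    "Are there geographic or industry-related trends in these prompts?",
]

def build_analysis_prompt(queries: list[str], questions: list[str]) -> str:
    """Combine exported long-tail queries with the analysis questions."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(queries, 1))
    asks = "\n".join(f"- {q}" for q in questions)
    return (
        "Below are long-form search queries from our Google Search Console "
        "export. Analyze them and answer each question.\n\n"
        f"Queries:\n{numbered}\n\nQuestions:\n{asks}"
    )

prompt = build_analysis_prompt(
    ["which sales enablement platforms are most cost-effective for enterprise teams in France"],
    QUESTIONS,
)
print(prompt.splitlines()[0])
```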
Uncovering Hidden Business Insights
Analyzing this conversational data often leads to “lightbulb moments” that traditional SEO data misses. For instance, you might find that users are frequently asking about a PR issue from three years ago that you thought was buried. Or, you might discover that a significant portion of your audience is looking for “cheaper alternatives” to a specific competitor, giving you a clear opening for a comparison landing page.
Another common insight is the “benchmark” effect. You may find that users consistently use one specific company as a reference point (e.g., “What is the Salesforce alternative for small teams in the UK?”). If your brand isn’t the one being used as the benchmark, you now have a clear goal for your brand awareness campaigns.
Generating Tracking Recommendations
The ultimate goal of mining this data is to build a robust prompt tracking system. By seeing how real users phrase their questions, you can move away from “best guesses” and toward data-backed monitoring. You can ask Claude to generate a list of 20-30 “Master Prompts” based on the patterns it found in your Search Console data.
These Master Prompts can then be plugged into AI visibility tracking tools like Profound or Peec. These tools will then monitor how various LLMs (ChatGPT, Gemini, Claude, Perplexity) respond to those specific prompts over time. This allows you to see if your brand is being recommended, if your competitors are gaining ground, or if the AI’s “opinion” of your brand is changing.
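Even before adopting a dedicated tool, the core check is simple enough to sketch yourself: given an LLM’s answer to a master prompt, record which tracked brands it mentions. The answer text below is invented; in practice it would come from each LLM’s API, and tools like Profound or Peec automate this across models and over time:

```python
# Sketch of a lightweight prompt-tracking check: which tracked brands appear
# in an LLM's answer to a master prompt? The answer text is invented.
import re

def brands_mentioned(answer: str, brands: list[str]) -> dict[str, bool]:
    """Case-insensitive whole-word check for each tracked brand."""
    return {
        b: bool(re.search(rf"\b{re.escape(b)}\b", answer, re.IGNORECASE))
        for b in brands
    }

answer = "For small teams, HubSpot and Pipedrive are commonly recommended."
print(brands_mentioned(answer, ["HubSpot", "Pipedrive", "Salesforce"]))
```

Logging this result per prompt, per model, per week gives you a crude but honest share-of-voice trend line.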
Is Prompt Tracking Scientific?
It is important to manage expectations when dealing with AI search data. A notable study by Rand Fishkin found that user prompts are incredibly diverse. When 142 respondents were asked to provide a prompt for the same query, the similarity score was a mere 0.081. This means that two people rarely ask an AI the same question in the exact same way.
Because of this variability, you will likely never be able to track every single prompt that leads a user to your site. However, that doesn’t mean the effort is wasted. As SEO expert Will Critchlow famously noted, we are doing business, not science. The goal isn’t to have a 100% accurate lab report of every prompt; the goal is to identify scalable, common themes that allow you to optimize your content for the way people actually communicate with machines.
Conclusion: The Future of Search Analytics
We are moving into an era of “zero-click” and “low-attribution” search, where the traditional link-based economy is being supplemented by AI-generated answers. In this environment, Google Search Console remains one of our most vital tools, but only if we evolve the way we use it. By applying regex filters to isolate long-tail conversational queries and using LLMs like Claude to analyze the underlying intent, we can turn a “black box” into a strategic roadmap.
Don’t wait for Google or OpenAI to provide a “Prompt Planner” tool. The data is already leaking into your reports. By identifying the specific ways your customers are prompting AI about your brand, you can ensure that when the AI answers, it’s your brand that gets the recommendation.