Stop paying for traffic: The enterprise CMO’s guide to ROI-driven SEO

The Death of the Vanity Metric: Why Traffic No Longer Equals Success

For years, the standard enterprise SEO reporting call has followed a predictable, and ultimately broken, script. Agencies or internal teams present a slide deck filled with upward-trending line graphs showing organic sessions, impressions, and “keyword reach.” They celebrate a 15% increase in top-of-funnel traffic while the Chief Marketing Officer (CMO) looks at a sales pipeline that remains stubbornly flat.

In the current economic climate, this disconnect is no longer sustainable. Marketing budgets are under unprecedented scrutiny, and every dollar must justify its existence through clear, attributable ROI. The hard truth is that optimizing for raw traffic volume is a legacy mindset—one that hides mediocre commercial performance behind a veil of vanity metrics.

The new mandate for the enterprise CMO is to transition away from being a “traffic buyer” and toward becoming an “authority builder.” This requires building an acquisition engine that influences buyers and protects the profit and loss (P&L) statement long before a transaction even occurs. To survive as a marketing leader today, you must ruthlessly challenge your teams to stop reporting on operational output and start delivering hard financial accountability.

The New Path to Purchase: Why Traffic is Bleeding Your Budget

The traditional marketing funnel is being disrupted by a fundamental shift in how consumers and B2B decision-makers find information. Chasing top-of-funnel informational traffic is increasingly becoming a trap. When you pay for content that attracts users looking for general information—users who have no intention of buying—you are effectively subsidizing vanity metrics that do nothing for your bottom line.

This shift is driven by the rise of Large Language Models (LLMs) and AI-driven search engines. Buyers now use tools like ChatGPT, Claude, and Perplexity to conduct deep, synthesized research before they ever land on a traditional search engine results page (SERP). By the time a user types a transactional query into Google, they have often already narrowed their choices down to two or three brands. If your brand is not the cited authority during that initial AI-driven research phase, you are invisible by the time the buyer reaches the transactional layer. You aren’t just losing traffic; you’re losing the “mindshare” that dictates the final purchase.

The 7.48% Reality: The Power of the Educated Buyer

The data reveals a staggering contrast in traffic quality when comparing traditional organic search to AI-driven discovery. Across enterprise client bases, traditional organic search typically converts at a rate of roughly 2.75%. In contrast, traffic originating from AI search citations converts at an average of 7.48%.

Why is there such a massive disparity? It comes down to the “trust proxy.” LLMs function as the ultimate validator for today’s consumers. When an AI tool synthesizes dozens of expert reviews, whitepapers, and technical forums to recommend a specific enterprise solution, the user views that recommendation as an objective consensus. By the time a user clicks on an AI citation and arrives at your site, they are no longer “browsing.” They have been armed with data, comparisons, and third-party validation. They are an educated buyer prepared to transact. For a CMO, this means that one visitor from an AI citation is worth nearly three visitors from a standard organic link.
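To make that gap concrete, here is a quick back-of-the-envelope comparison in Python. The two conversion rates come from the figures above; the deal value and visitor count are invented purely for illustration.

```python
# Compare pipeline value per 1,000 visitors at the two conversion rates
# cited above. AVG_DEAL_VALUE is a made-up figure for illustration.
ORGANIC_CVR = 0.0275       # traditional organic search
AI_CITATION_CVR = 0.0748   # traffic arriving via AI search citations
AVG_DEAL_VALUE = 50_000    # hypothetical enterprise deal size (USD)
VISITORS = 1_000

for label, cvr in [("Organic search", ORGANIC_CVR), ("AI citation", AI_CITATION_CVR)]:
    conversions = VISITORS * cvr
    print(f"{label}: {conversions:.1f} conversions ≈ ${conversions * AVG_DEAL_VALUE:,.0f} pipeline")

# 0.0748 / 0.0275 ≈ 2.7 — the "nearly three visitors" figure in the text
print(f"Value per visitor, AI citation vs. organic: {AI_CITATION_CVR / ORGANIC_CVR:.2f}x")
```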
From Found to Cited: Architecting the Default Recommendation

Capturing that 7.48% conversion rate requires a total evolution of your digital asset strategy. In the legacy SEO model, the goal was to “rank” among a list of blue links. In the new model, the goal is to be “cited” as the definitive option by the algorithms that guide human decision-making. Winning the AI consensus requires you to treat your content creation as structured capital management. You are no longer just “writing blogs”; you are building data-rich assets designed for machine extraction.

The Old Way vs. The New Way

Consider the difference in approach for an enterprise logistics company:

The Old Way: The team spends weeks publishing a 2,000-word blog post on “Top Supply Chain Trends for 2024.” It generates 5,000 monthly visitors, most of whom read the first three paragraphs and bounce. It adds zero value to the pipeline because it is too broad and lacks proprietary depth.

The New Way: The company builds a Generative Engine Optimization (GEO) hub. This includes a dedicated supply chain cost calculator with proprietary data tables, expert author schema tagging the lead engineers, and strict “answer-first” formatting.

LLMs require verifiable facts and consensus to generate confident answers. By structuring your digital assets with proprietary data and verifiable entities, you become the “default recommendation.” You may only get 500 visitors to this calculator instead of 5,000 to the blog post, but those 500 visitors are high-intent leads who are using your tool to justify a massive enterprise purchase.

Strategic ROI: Using Citation Authority to Reduce Ad Spend

One of the most significant failures in modern enterprise marketing is the siloed nature of SEO and Paid Media. SEO is often viewed as “free” traffic, while Paid is viewed as “guaranteed” traffic. This division creates massive financial inefficiencies. A sophisticated CMO must treat organic citation authority as a strategic financial lever to reduce overall Customer Acquisition Cost (CAC). When your organic assets dominate the AI Overview or the top of the SERP, your paid team has the opportunity to pull back on defensive ad spend.

The IF/THEN Logic of Integrated Search

To maximize ROI, your search strategy should follow a strict logical framework:

IF your brand is established as the default AI recommendation for a high-cost commercial category, THEN your paid team must aggressively reduce defensive brand bidding. There is no reason to pay for a click on your own brand name if you already own the primary AI citation and the top organic result. This slashes the overall Cost Per Acquisition (CPA).

IF paid search data identifies a highly profitable long-tail query with high conversion rates, THEN the SEO team must prioritize building a structured, data-heavy asset to capture that demand organically. This ensures that you don’t have to keep paying


Google Search Ads in 2026 require a different kind of audit

The landscape of digital advertising is undergoing a seismic shift. As we look toward the horizon of 2026, the traditional methods of auditing Google Search Ads are no longer just becoming dated—they are becoming obsolete. The emergence of sophisticated AI-driven campaign types, the push for massive campaign consolidation, and the transition from manual controls to “indirect” signals have fundamentally changed the relationship between advertisers and the Google Ads platform.

Brandon Ervin, Director of Product Management for Google Search Ads, recently appeared on Google’s Ads Decoded podcast to discuss these very shifts. The conversation touched on the evolution of “AI Max” (the next iteration of Performance Max), the necessity of campaign consolidation, and the future of advertiser control. While Ervin presented a vision of a platform that is more intuitive and powerful than ever, there remains a significant disconnect between Google’s product vision and the boots-on-the-ground reality experienced by media buyers and performance marketers.

To succeed in 2026, an audit cannot simply be a checklist of settings. It must be an economic evaluation of how value is being distributed across your account. If you are still auditing your accounts using 2020 frameworks, you are likely missing the “value redistribution” that is quietly eroding your profit margins.

The Paradox of “New” Controls: Innovation or Restoration?

Google has introduced several updates recently that are aimed at giving advertisers more “control” over automated systems. On the surface, these look like major wins for the community. These updates include:

Brand exclusions within Performance Max and Demand Gen campaigns.
The ability to exclude site visitors and existing customers from PMax.
Improved network-level reporting within bundled campaigns.
Enhanced visibility into search terms.
Brand and geographic controls at the ad group level within AI Max.
Semantic modeling that reduces the “learning period” risk during campaign consolidation.

While these are indeed helpful tools, a rigorous 2026 audit must view them through a critical lens. Many of these “innovations” are actually just the restoration of features that were standard before the aggressive push toward automation began. For example, the ability to separate brand from non-brand traffic was a fundamental setting for a decade. When Google removed that clarity in early iterations of PMax, it created a transparency gap. Reintroducing it years later is not necessarily a step forward; it is a restoration of a baseline that should never have been removed. An effective audit today must determine whether you are utilizing these tools to reclaim lost control or if you are still operating in the “black box” era of 2022-2024.

Establishing the 2026 Table Stakes

Before diving into the high-level economic audit, every account must have its fundamentals in order. In 2026, these are considered “table stakes.” If your account fails these basics, the more advanced AI models will have no foundation to build upon.

The Foundational Checklist

Your audit should first verify that the following are active and optimized:

Full Ad Extensions: Sitelinks, callouts, structured snippets, images, and call extensions must be fully populated to maximize the “real estate” your ad occupies on the SERP.

Intentional Automated Bidding: While manual bidding is nearly extinct, automated bidding must be governed by intentional targets (tCPA or tROAS) that align with actual business margins.

Negative Keyword Hygiene: Even with broad match dominance, negative keyword lists remain your primary tool for preventing budget waste.

Creative Relevance: Ads must be dynamically relevant to the queries they serve. This means using RSAs (Responsive Search Ads) effectively with high-quality assets.

Asset Auditing: Regularly review automatically created assets. Google’s AI is getting better at generating headlines and descriptions, but it can still produce brand-unsafe or inaccurate copy.

Channel Exclusion: For most pure search campaigns, cutting Search Partners and Display expansion remains a best practice to ensure your budget stays focused on high-intent searchers.

The Shift to Downstream Signals

The most important part of the 2026 foundation is your data feedback loop. You must move beyond surface-level conversion tracking (like “Form Fills” or “Add to Carts”). To feed the Google AI what it actually needs, you must import offline conversion data. This includes Marketing Qualified Leads (MQLs), Sales Qualified Leads (SQLs), actual revenue, and even Customer Lifetime Value (CLV). If the algorithm only sees “leads” but doesn’t see which leads turn into “revenue,” it will optimize for the cheapest, lowest-quality leads it can find.
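As a sketch of what closing that feedback loop can look like, the snippet below assembles a click-conversion import file of the kind Google Ads accepts for offline conversions. The column names follow Google’s commonly documented import template, but verify them (and the required timestamp format) against the current template before uploading; the GCLID, timestamp, and revenue values are placeholders.

```python
import csv

# Minimal sketch: join CRM revenue back to the ad click that produced it,
# so bidding optimizes toward revenue instead of raw lead volume.
# Column names follow Google's click-conversion import template (verify
# against the current template before uploading); values are placeholders.
FIELDS = ["Google Click ID", "Conversion Name", "Conversion Time",
          "Conversion Value", "Conversion Currency"]

closed_won = [
    # gclid captured at form submission, revenue pulled from the CRM at close
    {"gclid": "EXAMPLE_GCLID_123", "closed_at": "2026-01-15 09:30:00", "revenue": 48500.00},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    for deal in closed_won:
        writer.writerow({
            "Google Click ID": deal["gclid"],
            "Conversion Name": "Closed-Won Revenue",
            "Conversion Time": deal["closed_at"],
            "Conversion Value": deal["revenue"],
            "Conversion Currency": "USD",
        })
```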
Core Pillar 1: Signal Architecture

In the Ads Decoded podcast, Brandon Ervin argued that “control still exists, it just looks different.” This is a crucial takeaway for any 2026 audit. We have moved from “Direct Controls” (exact match keywords, device modifiers, manual bids) to “Indirect Controls” (data quality, signal density, and signal selectivity). In the past, you told Google exactly what to do. Today, you tell Google what you value, and the AI decides how to get it. Therefore, your audit must focus on the architecture of those signals.

Quality vs. Surface Conversions

Are you passing revenue and pipeline data back to Google? If you are a B2B company and you aren’t passing “Closed-Won” data back into the system, your AI Max campaigns are essentially flying blind. An audit should map out exactly which conversion actions are being used for “Primary” optimization and whether those actions correlate with actual profit.

Density and Learning

AI models require a certain volume of data to function. If your campaigns are too fragmented (the “anti-consolidation” approach), you won’t have enough conversion density for the model to learn. However, if you consolidate too much, you lose the ability to differentiate between high-value and low-value segments. The 2026 audit must find the “Goldilocks zone” of campaign structure: enough data to fuel the AI, but enough segmentation to maintain business logic.

Selectivity

Are you passing everything to Google indiscriminately? A high-performing account in 2026 is selective. This might mean only passing net-new customer data or weighting high-value customers more heavily than one-time buyers. You influence the algorithm by being picky about the data you feed it.

Core Pillar 2: The Incrementality Challenge

Google’s optimization engine is designed to maximize *reported*


Google leaves door open to ads in Gemini

The landscape of digital advertising is on the precipice of its most significant transformation since the invention of the search engine. For decades, Google has dominated the global market by perfecting the art of placing the right ad in front of the right person at the moment of intent. However, as the world pivots toward generative AI, the traditional “ten blue links” model is being challenged by conversational interfaces like Gemini. For months, the industry questioned how Google would monetize this new frontier without alienating its massive user base. Now, we have a clearer answer: the door is officially open.

Recent statements from high-ranking Google executives signal a pivot in the company’s long-term strategy for Gemini. While earlier rhetoric suggested a cautious, almost hands-off approach to advertising within the AI chatbot, the narrative has shifted toward integration. This evolution marks a critical moment for marketers, tech enthusiasts, and the broader digital economy, as the world’s most powerful advertising engine prepares to merge with its most advanced artificial intelligence.

The Shift from “No Plans” to “When, Not If”

To understand the current trajectory, we must look back at the beginning of 2024. In January, during the World Economic Forum in Davos, Google DeepMind CEO Demis Hassabis provided a relatively firm stance on the matter. At the time, Hassabis told reporters that Google had no immediate plans to introduce advertising into the Gemini experience. This was seen as a way to prioritize user trust and refine the core technology before cluttering the interface with commercial content.

However, the corporate stance has matured. In a recent interview, Nick Fox, Google’s Senior Vice President of Search, signaled a notable departure from that hardline denial. Fox indicated that while Google is still being deliberate, they are “not ruling out” the inclusion of ads within Gemini. This shift suggests that the conversation at Google has moved from the philosophical question of “should we?” to the practical question of “how and when?”

For a company that generated over $400 billion in revenue in 2025, the majority of which stems from its advertising ecosystem, the eventual monetization of its flagship AI product was perhaps inevitable. The “prioritization question,” as Fox frames it, implies that the infrastructure for AI-based advertising is already being conceptualized behind closed doors.

AI Mode: The Testing Ground for Future Ad Formats

Google is not diving headfirst into Gemini ads without data. Instead, the company is utilizing its “AI Mode”—the Gemini-powered features integrated directly into Google Search—as a sophisticated sandbox. By testing ad formats within AI-generated search summaries (often referred to as AI Overviews), Google can observe user behavior and ad performance in a controlled environment before migrating those learnings to the standalone Gemini app. The current strategy in AI Mode focuses on three primary pillars:

1. Strict Separation and Clear Labeling

One of the primary concerns with AI-generated content is the potential for “hallucinations” or biased information. To maintain credibility, Google ensures that ads are kept distinct from organic AI responses. These placements are clearly labeled as “Sponsored” or “Ads,” adhering to long-standing transparency standards. This distinction is vital for maintaining user trust in a conversational environment where the line between a recommendation and an advertisement can easily blur.
2. Extreme Relevance or Nothing

In a traditional search result page, showing a “close enough” ad might still yield a click. In a conversational AI experience, an irrelevant ad feels intrusive and disruptive. Google has stated that it only serves ads in AI Mode when they are highly relevant to the specific query. If the AI determines that no commercial partner perfectly fits the user’s intent, it simply doesn’t show an ad. This “quality over quantity” approach is designed to prevent the AI from feeling like a telemarketing tool.

3. Leveraging Two Decades of Search Expertise

Google isn’t starting from scratch. The company is drawing on more than 20 years of data regarding user intent, click-through rates, and auction dynamics. This historical data allows Google to predict with high accuracy which commercial interactions will be helpful to a user in a conversational flow. By the time ads officially land in the Gemini app, they will likely be powered by the most sophisticated relevance engine ever built.

Monetization Pressures: Google vs. OpenAI

The timing of Google’s shift in rhetoric is not accidental. The competitive landscape for generative AI is heating up, and the pressure to monetize is mounting across the industry. However, Google’s position is vastly different from that of its primary rival, OpenAI.

OpenAI, despite its massive valuation and cultural impact, is under significant pressure to scale its revenue. Recent reports suggest the company is aiming to more than double its $30 billion revenue target. To achieve this, OpenAI has already begun testing ads in the free tier of ChatGPT. For OpenAI, advertising is a necessary survival mechanism to offset the astronomical costs of training and running large language models (LLMs).

Google, by contrast, has the “luxury of patience.” With a revenue stream exceeding $400 billion, Google can afford to lose money on Gemini in the short term to ensure the user experience is perfected. This allows Google to watch OpenAI’s missteps and refine its own ad delivery system. But while Google has the luxury of time, it cannot wait forever. As users shift their search habits from standard queries to AI conversations, Google must ensure its revenue model shifts along with them.

The “Personal Intelligence” Factor: The Holy Grail of Targeting

One of the most intriguing aspects of Nick Fox’s recent insights involves “Personal Intelligence.” This refers to Gemini’s ability to integrate with a user’s personal Google ecosystem, including Gmail, Google Photos, and Google Calendar. By understanding a user’s schedule, their upcoming travel plans, and their personal preferences, Gemini becomes more than a chatbot—it becomes a digital assistant. Fox described this level of personalization as the “holy grail” for Search. If this personal data layer eventually informs the broader search and ad experience, the implications for advertisers are staggering. Imagine an AI that


Old Link Building vs. AI Search: How to Earn Top-Tier Media Placements Now

The Evolution of Search: Why Traditional Link Building Is Falling Behind

For nearly two decades, the backbone of Search Engine Optimization (SEO) was a relatively straightforward formula: create content, identify keywords, and acquire as many backlinks as possible. In the early days, quantity often outweighed quality. As Google’s algorithms matured, the focus shifted toward relevance and authority. However, we are currently witnessing the most significant shift in the history of the internet: the transition from traditional search engines to AI-driven discovery engines.

The rise of Generative AI, Large Language Models (LLMs), and AI-integrated search results—such as Google’s AI Overviews and ChatGPT Search—has fundamentally altered how information is indexed and presented. In this new landscape, the “old” methods of link building, such as directory submissions, low-tier guest posting, and transactional link exchanges, are not just losing effectiveness; they may actually be hindering a brand’s ability to appear in AI-generated answers.

To thrive in this environment, marketers and SEO professionals must pivot toward a strategy that prioritizes brand legitimacy and digital PR. The goal is no longer just to “get a link,” but to earn a place within the knowledge graphs that power modern AI. This requires a sophisticated approach to top-tier media placements that verify a brand’s authority to both human readers and machine learning algorithms.

Understanding the Shift from Links to Entities

To understand why traditional link building is struggling, we must understand how AI search differs from traditional Boolean or keyword-based search. Traditional search engines looked for “strings”—specific sequences of characters. If a website had the right keywords and enough backlinks with matching anchor text, it ranked well. AI search engines, however, look for “entities.” An entity is a well-defined concept or object, such as a person, a place, or a brand. AI models use a process called “semantic mapping” to understand the relationship between these entities.

When an AI provides a response to a user query, it isn’t just looking for a page with high PageRank; it is looking for the most “trusted” source of information regarding a specific entity. In this context, a link from a high-authority, top-tier media outlet acts as a massive signal of legitimacy. It tells the AI that your brand is a recognized authority within its niche. This is why a single mention in a publication like The Wall Street Journal or Wired is now worth more than a thousand links from obscure, mid-tier blogs. The former builds entity authority; the latter merely inflates a metric that AI is increasingly trained to ignore.
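Earned media does the heavy lifting in building that entity authority, but it also helps to make the entity itself unambiguous on your own site. One hedged illustration, complementing rather than replacing the PR work discussed here: a minimal schema.org Organization block whose sameAs links tie the brand to independent profiles. All names and URLs are placeholders.

```python
import json

# A minimal schema.org Organization block. The sameAs array ties the brand
# entity to independent, verifiable profiles, helping machines resolve
# "who is this?" All names and URLs below are placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com/",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Brand",
        "https://www.linkedin.com/company/example-brand",
        "https://www.crunchbase.com/organization/example-brand",
    ],
}
print(json.dumps(organization, indent=2))
```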
The Decline of the Transactional Link Building Model

The “old” link-building model was largely transactional. SEOs would reach out to webmasters, often offering content or payment in exchange for a link. This led to a cluttered ecosystem of “guest post sites” that exist solely to sell links. Google has become incredibly adept at identifying these patterns, often devaluing these links entirely or, in worse cases, penalizing the sites involved.

AI search takes this a step further. Because LLMs are trained on massive datasets of human language, they can distinguish between natural editorial citations and forced, artificial link placements. AI models prioritize “consensus.” If multiple high-authority news organizations and industry journals are talking about a brand in a specific context, the AI accepts that brand as a factual authority. Transactional links from low-quality sources do not contribute to this consensus; they are filtered out as noise.

Why Top-Tier Media Placements Are the New Gold Standard

Earning placements in top-tier media has always been a goal for public relations professionals, but it is now a critical requirement for SEO. These placements serve three primary functions in the age of AI search:

1. Validating E-E-A-T Signals

Google’s focus on Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) is more prominent than ever. When an expert from your company is quoted in a major publication, or when your brand’s original research is cited by a reputable news desk, it provides the ultimate validation of E-E-A-T. AI models use these citations to verify that the information you provide is accurate and backed by real-world authority.

2. Feeding the AI Training Sets

LLMs are trained on the “Common Crawl” and other massive repositories of internet data. However, not all data is weighted equally. Developers of AI models prioritize high-quality, edited, and fact-checked content. By securing placements in top-tier media, you ensure that your brand’s name and expertise are included in the high-quality datasets that future AI models will use to “learn” about your industry.

3. Driving Referral Traffic and Brand Awareness

While the SEO benefits are paramount, we cannot overlook the traditional value of media placements. Top-tier outlets have massive, engaged audiences. A single well-placed article can drive thousands of qualified leads to your site. In an era where AI might provide the answer directly on the search results page (zero-click searches), having a strong brand that people recognize and search for by name is a vital safeguard.

Strategies for Earning Top-Tier Media Placements

Moving away from old link building requires a new toolkit. You cannot “buy” your way into the New York Times; you have to earn your way in. This process, often called Digital PR, involves several key strategies.

Original Data and Proprietary Research

Journalists are always looking for new, interesting data to support their stories. If your company has access to unique data points, you can package this into a research report or a white paper. By providing journalists with “the first look” at a new trend or statistic, you provide immense value. When they write about your findings, they will almost certainly cite your brand as the source, creating a high-authority link and a strong entity signal for AI.

The “Expert Source” Methodology

News moves fast. When a major event happens in your industry, journalists need expert commentary immediately. By positioning your C-suite executives or lead researchers as “on-call” experts, you can secure mentions in breaking news stories. Platforms like Connectively (formerly HARO) or Featured.com are useful, but direct relationship building with journalists


Google AI Overviews cut search clicks 42%: Report

The Changing Landscape of Google Search

The digital publishing world is currently navigating one of the most significant shifts in the history of the open web. For decades, the relationship between Google and publishers was relatively symbiotic: publishers provided the content, and Google provided the audience through organic search results. However, the introduction and aggressive expansion of Google’s AI Overviews (AIO) has fundamentally altered this dynamic. According to a comprehensive new report from Define Media Group, the impact is no longer theoretical—it is measurable, and for many, it is stark.

The report reveals that organic search clicks have plummeted by 42% since the broader rollout of AI-generated summaries. This decline represents a massive redistribution of traffic that threatens traditional SEO strategies while simultaneously opening new, albeit different, doors for growth. As Google transforms from a “search engine” that directs users to websites into an “answer engine” that provides information directly on the results page, the industry is witnessing a pivot toward real-time reporting and feed-based discovery.

A Deep Dive into the Numbers: The 42% Decline

To understand the gravity of these findings, it is essential to look at the dataset provided by Define Media Group. The analysis drew from Google Search Console data across a diverse portfolio of 64 high-traffic websites. This wasn’t a small sample size; the baseline traffic for these sites was substantial, providing a clear window into how user behavior has changed since AI became the centerpiece of the search experience.

From the first quarter of 2023 through the first quarter of 2024, organic search traffic for this portfolio was stable, averaging approximately 1.7 billion clicks per quarter. This period serves as the “pre-AI” baseline. The disruption began almost immediately after the initial launch of AI Overviews. Upon the first implementation, search traffic saw an immediate 16% dip. Unlike previous algorithm updates, where traffic might fluctuate and then stabilize, this traffic never recovered to its original levels.

The situation intensified in May 2025, when Google significantly expanded the footprint of AI Overviews. This expansion meant that more queries across a wider range of categories were being met with an AI-generated summary at the top of the page. By the fourth quarter of 2025, the cumulative loss was staggering: organic search clicks had dropped by a total of 42% compared to the pre-AI baseline. This trend suggests that as Google refines its AI, the “zero-click” search—where a user finds their answer without ever leaving Google—is becoming the new standard for informational queries.
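Translating the report’s percentages into absolute click volumes (using only the figures cited above) shows the scale of the loss:

```python
# Back-of-the-envelope math from the figures cited above.
baseline = 1.7e9                        # quarterly clicks, pre-AI baseline

after_launch = baseline * (1 - 0.16)    # immediate 16% dip at AIO launch
q4_2025 = baseline * (1 - 0.42)         # cumulative 42% decline

print(f"Post-launch:  ~{after_launch / 1e9:.2f}B clicks per quarter")
print(f"Q4 2025:      ~{q4_2025 / 1e9:.2f}B clicks per quarter")
print(f"Lost vs. baseline: ~{(baseline - q4_2025) / 1e6:.0f}M clicks per quarter")
```

On these numbers, the portfolio shed roughly 700 million clicks per quarter relative to its pre-AI baseline.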
The Polarization of Content: Evergreen vs. Breaking News

While the overall 42% drop is alarming, the report highlights that the pain is not being felt equally across all types of content. The data reveals a sharp polarization between “evergreen” or informational content and breaking news.

Evergreen content, which includes how-to guides, definitions, and general information, has historically been the bread and butter of long-term SEO. Unfortunately, this is exactly the type of content that AI Overviews are best at summarizing. When a user asks “How to change a tire” or “What is the capital of Kazakhstan,” the AI can provide a concise, accurate answer sourced from the web, removing the need for the user to click on a specific article. Consequently, publishers who rely heavily on “how-to” and general knowledge traffic are seeing their search referrals evaporate.

Conversely, the report found a remarkable surge in traffic for breaking news. From November 2024 through early 2026, breaking news traffic grew by 103%. This suggests that while Google is comfortable using AI to answer static questions, it is still leaning heavily on traditional publishers to provide real-time updates on developing stories. For the news industry, the “Top Stories” carousel remains a vital lifeline, often appearing in place of, or more prominently than, AI summaries during major events.

Google Discover: The New Lifeblood of Web Traffic

As traditional web search traffic declines, a new hero has emerged for publishers: Google Discover. The Define Media Group report indicates that Discover traffic grew by 30% across their portfolio during the same period that search clicks were falling. Perhaps the most significant finding in the report is that, for the first time, Discover and traditional web search now drive roughly equal amounts of traffic for many major publishers.

Google Discover operates differently than Search. While Search is intent-based—meaning a user is looking for something specific—Discover is interest-based. It pushes content to users based on their browsing history and preferences through a feed on mobile devices. This “push” model is proving to be more resilient to AI disruption than the “pull” model of traditional search.

The growth in Discover traffic appears to be a deliberate part of Google’s ecosystem shift. As the company uses AI to satisfy specific queries, it is using Discover to keep users engaged with a curated stream of fresh content. For publishers, this means that “optimizing for Discover”—which involves high-quality imagery, engaging headlines, and timely topics—is now just as important, if not more so, than traditional keyword-based SEO.

Why AI Overviews Shy Away from Real-Time News

One of the most intriguing aspects of the report is the low frequency with which AI Overviews appear for news-related queries. Data from Ahrefs cited in the report shows that AI Overviews appeared for only about 15% of news queries, roughly a third as often as in categories like health, science, or technology, where the information is often more factual and less time-sensitive. There are several logical reasons why Google is exercising caution with AI in the news space:

1. The Risk of Hallucination

Generative AI models are prone to “hallucinations”—confidently stating facts that are incorrect. In the context of breaking news, where details change by the minute, the risk of providing a false summary is high. Google likely views the “Top Stories” carousel as a safer alternative, as it attributes information directly to trusted news brands rather than generating its own interpretation.

2. High Accuracy Stakes

For topics like international conflicts, political developments, or public safety, the stakes for


B2B Buyers Trust Peers Over AI Chatbots, Report Finds

The Evolving Landscape of B2B Decision-Making

The rapid integration of artificial intelligence into the business world has promised a revolution in efficiency, data processing, and customer interaction. From automated lead nurturing to 24/7 customer support chatbots, AI is everywhere. However, a recent report focusing on B2B decision-makers has revealed a significant disconnect between the availability of AI tools and the trust buyers place in them. According to the findings, B2B buyers trust peer recommendations nearly twice as much as they trust information provided by AI chatbots.

This revelation highlights a critical human element that remains immovable despite the technological shift: the value of lived experience. While AI can process billions of data points in seconds, it currently lacks the professional credibility and accountability that come from a colleague or industry peer who has navigated similar challenges. This shift in trust dynamics is reshaping how companies approach their marketing and sales funnels. It suggests that while AI is an excellent tool for productivity, it is not yet viewed as a reliable source for high-stakes decision-making. For marketers and business leaders, understanding this gap is essential for building a strategy that resonates with modern buyers who are increasingly skeptical of automated narratives.

The Power of Peer Recommendations: Why Human Connection Wins

In the B2B sector, the stakes are high. Purchases often involve six-figure budgets, multi-year contracts, and significant organizational changes. When a decision-maker chooses a new software platform or a professional service provider, their professional reputation is on the line. The report indicates that peer recommendations are the gold standard for trust. This is likely due to several key factors that AI cannot currently replicate:

Accountability and Risk Mitigation

When a peer recommends a product, they are staking their own credibility on that recommendation. If a colleague tells you that a specific CRM transformed their sales pipeline, you trust that information because they have no ulterior motive other than professional courtesy. In contrast, an AI chatbot is perceived as a tool programmed by the vendor, inherently carrying a bias toward the product it represents.

Shared Context and Industry Nuance

Peers understand the specific “pain points” of an industry. They know the regulatory hurdles, the integration headaches, and the cultural shifts required to implement new technology. A chatbot might provide a technical summary of a product’s features, but a peer can explain how those features actually perform during a high-stress quarterly audit or a massive data migration.

The Rise of “Dark Social”

Much of this peer-to-peer influence happens in what marketers call “Dark Social”—private Slack channels, closed LinkedIn groups, and face-to-face networking events. These are environments where AI cannot reach and where traditional tracking metrics fail. The report’s findings confirm that these private conversations carry more weight than any public-facing AI interface or marketing collateral.

The Skepticism Surrounding AI Chatbots

While AI chatbots have become more sophisticated with the rise of Large Language Models (LLMs), the B2B community remains wary. The report’s finding that trust in AI is significantly lower than trust in peers points to several systemic issues within the current state of AI technology.
The Problem of Hallucinations and Accuracy

One of the biggest hurdles for AI in B2B sales is the risk of “hallucinations”—instances where the AI confidently provides incorrect information. In a B2B context, where technical specifications and contract terms must be precise, a single piece of misinformation can derail a deal or lead to a costly mistake. Buyers are aware of these limitations and are therefore hesitant to rely on AI for critical research.

The Lack of Transparency

B2B buyers often want to know the “why” behind a recommendation. AI chatbots, particularly those built on proprietary models, often function as a “black box.” It is difficult for a user to trace how the AI reached a specific conclusion or whether the information is being filtered to favor the vendor’s most profitable packages. Without this transparency, trust remains elusive.

The “Human Touch” in Complex Negotiations

The B2B buying journey is rarely linear. It involves negotiation, customization, and relationship building. Chatbots excel at answering frequently asked questions, but they struggle with the nuances of a complex negotiation. Buyers feel more comfortable talking to someone who can empathize with their specific situation, a trait that AI, by its very nature, can only simulate.

The Decline of the Traditional White Paper

Perhaps the most surprising finding in the report is the ranking of white papers. Once considered the cornerstone of B2B content marketing, white papers now rank last for perceived value among decision-makers. This marks a significant shift in how professionals consume information and signifies the end of an era for “gated content” as a primary lead generation tool.

Information Overload and Time Constraints

Modern B2B buyers are busier than ever. The traditional 20-page white paper, filled with dense jargon and lengthy case studies, is often seen as a chore rather than a resource. Buyers are moving toward “snackable” content—short videos, interactive tools, and concise executive summaries that provide immediate value without requiring a significant time investment.

Perceived Bias and Sales Intent

Over the years, the quality of white papers has become inconsistent. Many have transitioned from objective, research-based documents into glorified sales brochures. Buyers have become savvy to this; they see a white paper as a biased document designed to push them toward a specific solution rather than an educational tool. This skepticism has driven the perceived value of the format to an all-time low.

The Shift to Real-Time Data

In a fast-moving tech economy, a white paper published six months ago might already be obsolete. Buyers are looking for real-time insights, live webinars, and dynamic data visualizations. Static PDFs simply cannot compete with the immediacy of social media discussions or live-updated industry benchmarks.

Strategies for B2B Marketers in a Peer-Driven Market

The report’s findings serve as a wake-up call for B2B organizations. If buyers trust peers over AI and value white papers the least, marketers must pivot their strategies to focus on community, advocacy, and authentic engagement. Prioritizing Customer


How To Build An AI SEO Strategy That Outlasts Tactics

Understanding the Shift: Why Tactics Alone Fail in the AI Era

The search engine optimization landscape is currently undergoing its most significant transformation since the introduction of the first ranking algorithms. With the integration of Large Language Models (LLMs) into search results through Google’s AI Overviews (formerly SGE), Bing Chat, and conversational engines like Perplexity, the old playbook is being rewritten. Many digital marketers are responding to this shift by scrambling for quick fixes—tactics like mass-producing AI content or attempting to “hack” the latest update. However, tactics are temporary. A strategy built solely on tactics is fragile and prone to collapse whenever a search engine updates its core algorithm.

To succeed in the modern era, brands must move beyond a “tactic-first” mentality. An AI SEO strategy that outlasts tactics is one built on a foundation of data, user intent, and brand authority. It recognizes that while the tools for content creation and technical optimization have changed, the fundamental goal remains the same: providing the most valuable, authoritative, and accessible answer to a user’s problem. This guide explores how to build a durable AI SEO strategy that remains effective even as the underlying technology evolves.

The Difference Between Tactics and Strategy in AI SEO

Before diving into the framework, it is essential to distinguish between a tactic and a strategy. A tactic is a specific action taken to achieve a small, immediate goal. Examples include using an AI writing tool to generate meta descriptions or using a scraper to find keyword gaps. While useful, these actions are easily replicated by competitors and offer no long-term competitive advantage.

A strategy, on the other hand, is a high-level plan that coordinates your resources to achieve a long-term vision. An AI SEO strategy focuses on how your brand will position itself within the AI-driven information ecosystem. It considers how LLMs crawl data, how they cite sources, and how human behavior changes when interacting with chat interfaces. A durable strategy focuses on building “moats”—unique advantages that AI cannot easily replicate, such as proprietary data, unique brand voice, and deep topical authority.

Pillar 1: Answer Engine Optimization (AEO) and the Information Gain Model

Search engines are no longer just lists of links; they are “Answer Engines.” This shift toward Answer Engine Optimization (AEO) requires a rethink of how content is structured. AI models are trained to synthesize information from multiple sources to provide a single, cohesive answer. To stay relevant, your content must be structured in a way that these models can easily parse and cite.

Prioritizing Information Gain

In a world where AI can summarize the top ten search results in seconds, “regurgitated” content has zero value. If your article says the same thing as every other article on the web, an AI model will summarize the consensus and likely omit a link to your site. To survive, you must provide “Information Gain.” This is the addition of new, unique information that isn’t found elsewhere. This could include original research, case studies, personal experience, or a contrarian viewpoint backed by data. Information gain is what makes your content “citeable” by an AI engine.

Structuring for Fragmented Retrieval

AI models often retrieve information in chunks rather than reading entire pages. A durable strategy involves optimizing these chunks. Use clear, descriptive subheadings, bulleted lists for technical specifications, and concise “TL;DR” summaries at the beginning of long-form pieces. By making your information modular, you increase the likelihood that an AI assistant will extract your specific data point or quote for its answer.

Pillar 2: Technical SEO for a Machine-Learning World

The technical side of SEO has evolved from simple indexing to ensuring “data readiness.” If search engines are the engines, data is the fuel. If your site’s data is messy, AI will struggle to interpret it correctly.

The Role of Structured Data (Schema.org)

Schema markup has never been more important. It serves as a translator between your human-readable content and the machine-readable requirements of LLMs. By using advanced schema—such as Product, Organization, Person, and FAQ—you provide explicit context that helps AI understand the relationships between different entities on your site. This reduces the “hallucination” risk for the AI and increases the chances of your brand being featured in rich snippets and AI-generated overviews.

Managing Crawl Budgets for LLM Bots

With the rise of bots like GPTBot, CCBot, and others, managing your crawl budget and permissions is a strategic necessity. A durable strategy involves making intentional decisions about which parts of your site should be accessible to AI crawlers. While blocking all AI bots might protect your intellectual property, it could also lead to your brand being invisible in conversational search results. A balanced strategy involves allowing access to high-value informational pages while protecting proprietary tools or sensitive data via robots.txt and advanced header tags.
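As one illustration of that selective-access decision, the sketch below encodes a robots.txt policy that admits OpenAI’s GPTBot and Common Crawl’s CCBot to a guides section while keeping them out of a proprietary tools directory, then verifies the rules with Python’s standard-library robot parser. The paths are placeholders, and whether to admit these crawlers at all is a business decision, not a recommendation.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: allow AI crawlers into high-value informational
# pages, keep them out of proprietary tooling. Paths are placeholders.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /guides/
Disallow: /tools/

User-agent: CCBot
Allow: /guides/
Disallow: /tools/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for bot, path in [("GPTBot", "/guides/ai-seo"), ("GPTBot", "/tools/calculator")]:
    verdict = "allowed" if parser.can_fetch(bot, path) else "blocked"
    print(f"{bot} on {path}: {verdict}")
```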
Pillar 3: Authority and E-E-A-T in the Age of Generative AI

Google’s emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) is a direct response to the flood of low-quality AI content. A durable AI SEO strategy leans heavily into the “Experience” aspect. AI can synthesize information, but it cannot “experience” a product, a location, or a process.

Showcasing First-Person Expertise

To outlast tactics, your content must emphasize the human element. This means using phrases like “In our testing,” “Our team found,” or “Based on my 10 years in the industry.” Highlighting real people behind the content—complete with detailed author bios, links to social profiles, and a history of published work—creates a trust signal that AI-generated sites cannot replicate. This “human-in-the-loop” approach ensures that even if AI helps write the draft, the expertise is authentically human.

Building a Brand Moat

Brand searches are the most resilient form of traffic. If a user asks an AI, “What is the best CRM for small businesses?” the AI might list several options. If the user asks, “How do I set up a workflow in Salesforce?” they are already in an ecosystem. A strategy focused on brand building ensures that you are the destination, not just a source


Why most video ads fail — and what video metrics actually matter

Video advertising has entered an era of unprecedented accessibility. Today, a brand can launch a global campaign across YouTube, Instagram, TikTok, and Connected TV (CTV) with little more than a credit card and a high-speed internet connection. Platforms have perfected the art of distribution, delivering billions of impressions and views to nearly every demographic on the planet. For many marketers, the sheer scale of modern reach feels like a guaranteed win.

However, there is a growing disconnect between distribution and effectiveness. While digital dashboards are glowing with green arrows indicating millions of views and high completion rates, those numbers often fail to translate into actual business results. We are seeing a paradox where campaigns generate massive platform engagement but produce almost no measurable impact on brand preference, search volume, or sales. The reality is that while it has never been easier to get a video seen, it has never been harder to get a video to matter.

The failure of most video ads isn’t typically a failure of targeting or budget. It is a strategic failure rooted in a misunderstanding of what makes a viewer stop, listen, and remember. To fix the broken model of video advertising, we must move beyond vanity metrics and understand the nuanced relationship between creative execution and human psychology.

Most video ads fail because they misunderstand attention

The most common mistake in modern video advertising is treating digital platforms like traditional television. In the golden age of TV, the audience was essentially captured. If you were watching a show, you were likely to sit through the commercial break. Distribution was the primary hurdle; if you could afford the airtime, you had the audience’s attention by default.

In the digital world, distribution is a commodity, but attention is the scarcest resource on earth. Today’s viewers are not a captive audience. Whether they are scrolling through a social feed, waiting for a YouTube video to start, or watching a streaming service, they arrive with specific intent and established habits. They are looking for entertainment, education, or connection—not your sales pitch. Every ad is an interruption of that intent. When we plan for reach, we are simply buying the right to interrupt. But when we plan for relevance, we are earning the right to stay.

Many marketing meetings focus on “impressions delivered.” This is a dangerous trap. An impression is merely a technical confirmation that a file was served on a screen. It says nothing about whether a human being looked at it, processed the information, or felt an emotional response. When there is no connection between high views and downstream business metrics like search lift or site engagement, the campaign has failed to bridge the gap between “seen” and “absorbed.”

The first five seconds are the entire negotiation

The introduction of the “Skip” button changed the fundamental nature of advertising. It turned every ad into a high-stakes negotiation. If you haven’t given the viewer a reason to stay within the first few seconds, the negotiation is over, and the skip button is pressed. Yet, many advertisers still produce ads that bury the hook at the end of a long, cinematic buildup. Early in the digital transition, common wisdom suggested putting branding front and center.
Marketers would open with a high-resolution logo, polished product shots, and professional music cues to signal brand authority. While these ads look impressive in a boardroom presentation, they often trigger a “reflexive skip” in the real world. As soon as a viewer sees a corporate logo or a traditional commercial aesthetic, their brain identifies it as “not what I came for” and begins looking for the exit.

Successful video ads treat the first five seconds like a headline in a newspaper. You don’t lead with the author’s name; you lead with the story. The opening frame must present a recognizable problem, a provocative question, or an unexpected visual that disrupts the scroll. The goal is to create “cognitive friction”—something that makes the brain pause its autopilot mode to investigate what it’s seeing. In brand lift analyses, we often find that the majority of an ad’s impact occurs before the skip button even appears. If you don’t win the first five seconds, the remaining fifty-five seconds are irrelevant. High-performing ads often delay the hard branding in favor of a narrative hook, earning the viewer’s attention before revealing the messenger.

Higher production value often correlates with lower performance

One of the most jarring lessons for traditional creative directors is that “expensive” does not always mean “effective.” In fact, on platforms like TikTok, Reels, and YouTube, overly polished studio content frequently underperforms compared to scrappier, more authentic-looking video. This is because modern audiences have developed a “filter” for traditional advertising. When a video looks like it was made by a professional agency with a six-figure lighting budget, it immediately signals “advertisement.”

Digital audiences crave authenticity. They respond to content that feels like it was created by a peer rather than a corporation. This is why phone-shot testimonials or simple, direct-to-camera explanations often drive higher engagement and conversion than cinematic masterpieces. The goal isn’t to look cheap or amateurish; the goal is to look native to the platform. An ad on TikTok should look like a TikTok. An ad on LinkedIn should respect the professional visual grammar of that feed.

Algorithms reinforce this behavior. Social media algorithms prioritize watch time and retention. When a user sees a video that looks like an organic post from a friend or an influencer, they are more likely to watch the first few seconds. If the content is valuable, they stay. If it looks like a TV commercial that was simply resized for a phone, they swipe away instantly. Performance declines when brands try to “upgrade” their visual identity at the expense of platform-native authenticity.

Designing for the sound-off environment

A significant portion of mobile video is consumed without sound. If your ad relies entirely on a voiceover


AI Max increases revenue 13% but drives higher CPA: Study

The Evolution of Search: Understanding the AI Max Shift

For over two decades, Google Ads was a game of syntax. Digital marketers spent countless hours refining keyword lists, obsessing over match types, and sculpting negative keyword lists to ensure their ads appeared for the most relevant queries. However, we are currently witnessing the sunset of that era. Google is aggressively moving toward a future defined by intent rather than specific phrasing, and the spearhead of this movement is AI Max.

AI Max represents more than just a minor feature update; it is a fundamental reimagining of how Search campaigns function. By integrating the automation logic found in Performance Max (PMax) directly into the core of Search, Google is attempting to bridge the gap between traditional keyword-based targeting and fully automated, intent-based bidding. But as a recent study reveals, this transition comes with significant financial implications that every advertiser must understand.

The Data Speaks: Growth vs. Efficiency

The core dilemma of AI Max is encapsulated in a recent analysis conducted by Mike Ryan of Smarter Ecommerce. After auditing more than 250 campaigns, the data paints a complex picture of what happens when advertisers hand the keys over to Google’s latest AI tool. The study found that while AI Max is undeniably effective at driving top-line growth, that growth often comes at a steep price.

The median results from the analysis show a 13% increase in revenue for campaigns utilizing AI Max. For many brands, a double-digit jump in revenue is a clear victory. However, the efficiency metrics tell a different story. During the same period, the median Cost Per Acquisition (CPA) rose by 16%. When costs rise faster than revenue, profit margins naturally tighten, creating a situation where advertisers are essentially paying more to acquire the same—or slightly more—volume.

Furthermore, the Return on Ad Spend (ROAS) showed a staggering range of volatility. In some successful implementations, ROAS improved by as much as 42%. In others, it plummeted by 35%. This variance suggests that AI Max is not a “set it and forget it” solution, but rather a high-stakes tool that requires careful monitoring and strategic deployment.
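The interaction between those two medians is easy to underestimate: if revenue grows 13% while CPA grows 16%, total spend grows far faster than revenue. A quick sketch, assuming average order value stays flat so that conversions scale with revenue (the baseline figures are arbitrary):

```python
# What the study's median figures imply, assuming average order value
# stays flat (so conversions scale with revenue). Baselines are arbitrary.
base_revenue, base_conversions, base_cpa = 100_000.0, 1_000, 50.0

new_revenue = base_revenue * 1.13          # +13% revenue
new_conversions = base_conversions * 1.13  # constant-AOV assumption
new_cpa = base_cpa * 1.16                  # +16% CPA

base_cost = base_conversions * base_cpa
new_cost = new_conversions * new_cpa

print(f"Cost change: {new_cost / base_cost - 1:+.1%}")   # ≈ +31.1%
print(f"ROAS change: {(new_revenue / new_cost) / (base_revenue / base_cost) - 1:+.1%}")  # ≈ -13.8%
```

Under that assumption, spend rises roughly 31% and ROAS falls roughly 14%, which is exactly the margin squeeze the study describes.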
What Exactly Is AI Max?

To understand why these performance swings occur, we must look at what AI Max actually does. It isn’t a new campaign type in the way PMax was; instead, it is a suite of three core automated features designed to expand the reach of existing Search campaigns.

1. Search Term Matching

This is perhaps the most significant change. AI Max pushes beyond traditional keyword syntax. It utilizes broad match expansion coupled with “keywordless” targeting. Essentially, Google’s algorithms analyze the content of your landing pages and the intent of a user’s search query to serve an ad, even if that query doesn’t contain a single keyword from your ad group. It focuses on the “why” behind the search rather than the “what.”

2. Text Customization

AI Max takes dynamic search ads to the next level by automatically generating and testing ad copy. By analyzing what performs best for specific user segments, the system can customize headlines and descriptions in real time. The goal is to maximize relevance for the individual user, theoretically increasing click-through rates (CTR).

3. Final URL Expansion

In a traditional campaign, the advertiser selects the landing page. With Final URL Expansion, Google’s AI decides which page on your website is the best fit for a specific query. If a user searches for a specific product feature that is buried deep in your blog or a sub-category page, AI Max can bypass your standard landing page and send the user directly to the most relevant content.

The Performance Paradox: Google’s Claims vs. Real-World Results

There is a notable discrepancy between Google’s official narrative and the independent data from the Smarter Ecommerce study. Google reports that advertisers who activate AI Max features typically see a 14% increase in conversions or conversion value at a similar CPA or ROAS. For campaigns still relying heavily on exact and phrase match keywords, Google claims that lift can jump as high as 27%.

So, why the gap? One significant factor flagged by Mike Ryan is that Google’s 14% uplift statistic conspicuously excludes retail data. For e-commerce brands, this omission is a major red flag. Retail is often the most competitive and complex sector of search marketing, and the exclusion of this data suggests that AI Max may struggle more in product-led environments than in service-based lead generation.

There is also a deeper irony in the adoption of these tools. Google suggests that the highest incremental benefits come from accounts that are still “old school” (using exact and phrase match). However, the advertisers most likely to adopt AI Max are the “early adopters” who are already using Broad Match and Performance Max. According to the data, these advanced accounts actually see the lowest incremental benefit because the AI is already doing much of the heavy lifting elsewhere.

Four Critical Pitfalls Identified in the Study

The shift to AI Max isn’t just about higher CPAs; it introduces several structural risks that can erode campaign health if left unchecked. The Smarter Ecommerce study highlighted four primary areas of concern.

1. Broad Match Cannibalization

One of the most troubling findings was that AI Max often “recycles” existing traffic rather than finding new customers. The study found that up to 63% of the time, AI Max was simply bidding on queries that the advertiser’s existing keyword coverage would have already captured. Instead of providing true incrementality, the AI was often just shifting credit from one part of the account to another, sometimes at a higher cost.

2. Competitor Hijacking

Automation tools like AI Max are designed to find conversions wherever they can, and often, the “low-hanging fruit” is competitor brand terms. In one analyzed account, AI Max scaled so aggressively into competitor brand names that it consumed 69% of the total Search impressions. While bidding on competitors can be a valid strategy, doing so unintentionally can lead to expensive bidding


New finding: ChatGPT sources 83% of its carousel products from Google Shopping via shopping query fan-outs

The Evolution of AI Search and the Shopping Data Mystery

For the past year, the tech world has watched closely as OpenAI has attempted to pivot ChatGPT from a conversational chatbot into a full-fledged search engine. With the rollout of integrated search features, the question on every SEO professional’s and digital marketer’s mind has been: where is this data coming from? While OpenAI’s historical partnership with Microsoft suggested a heavy reliance on Bing, recent technical investigations have uncovered a surprising reality. When it comes to e-commerce, ChatGPT is looking toward Mountain View, not Redmond.

A comprehensive new study has revealed that ChatGPT sources a staggering 83% of its carousel products directly from Google Shopping. This discovery was made by analyzing “query fan-outs” (QFOs), the internal search queries ChatGPT generates to fetch real-time data. The findings suggest that despite OpenAI’s move toward independence, the platform has developed a significant, perhaps even systemic, reliance on Google’s product index to power its shopping recommendations.

Understanding the Technical Link: The id_to_token_map Discovery

The investigation into ChatGPT’s sourcing began in late 2025, when AI researchers identified a field within ChatGPT’s source code labeled id_to_token_map. While the field initially appeared to be a string of gibberish, it was actually base64 encoded. Once decoded, the data revealed a trove of parameters synonymous with the Google Shopping ecosystem. Researchers found specific identifiers such as productid and offerid, alongside locale and language parameters. Most tellingly, the decoded field contained the exact query used to trigger the product lookup.

By extracting these parameters, researchers were able to reconstruct full Google Shopping URLs. When these URLs were tested, they led directly to the same products displayed within the ChatGPT interface. This technical “smoking gun” proved that ChatGPT isn’t just “finding” products on the web through general crawling; it is actively querying Google’s structured shopping data to populate its interactive carousels. That raises vital questions about the architecture of AI search and how much of the “AI answer” is simply a re-ranking of existing search engine results.
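The decoding step itself is simple to reproduce in principle. The snippet below is a minimal sketch with an invented payload; the real id_to_token_map structure is internal to ChatGPT and may differ, but the productid, offerid, locale, and query parameters mirror those the researchers reported, and the URL pattern is Google Shopping’s public product-page format standing in for whatever the study reconstructed:

```python
import base64
import json
from urllib.parse import urlencode

# Invented payload for illustration; the genuine field's exact
# structure is not public. Encoding it first lets the snippet
# round-trip the way the researchers decoded the real field.
encoded = base64.b64encode(json.dumps({
    "productid": "1234567890123456789",
    "offerid": "9876543210987654321",
    "gl": "us",  # locale
    "hl": "en",  # language
    "q": "best budget mechanical keyboards",
}).encode()).decode()

decoded = json.loads(base64.b64decode(encoded))

# Reconstruct a Google Shopping product URL from the decoded
# parameters, the same verification step described above.
url = (f"https://www.google.com/shopping/product/{decoded['productid']}"
       f"?{urlencode({'gl': decoded['gl'], 'hl': decoded['hl'], 'q': decoded['q']})}")
print(url)
```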
What Are Shopping Query Fan-Outs?

To understand how ChatGPT retrieves information, we have to look at “query fan-outs.” When a user types a prompt like “best budget mechanical keyboards,” ChatGPT doesn’t just consult its training data. Instead, it “fans out” the request into multiple secondary search queries to find current web results. The study categorized these into two types: normal search fan-outs and shopping query fan-outs (QFOs). The data shows that these two processes are fundamentally different and operate on separate tracks.

After analyzing 1.1 million shopping QFOs, researchers found that shopping fan-outs are unique to the user prompt 99.7% of the time. More importantly, they are distinct from the general search fan-outs 98.3% of the time. This suggests that ChatGPT recognizes when a user is in a “buying” mindset and switches to a retrieval pipeline designed specifically for products.

The Differences in Query Structure

The study found a clear divergence in how these queries are constructed:

Search fan-outs average 12 words in length. They are designed to be descriptive and contextual, aiming to retrieve web pages, articles, and reviews that can be used to synthesize a written response.

Shopping fan-outs are much shorter, averaging only seven words. Their primary goal is to hit a specific shopping index and return a list of products. They act more like a traditional search bar entry than a conversational prompt.

The frequency of these queries also differs. On average, a single user prompt triggers 2.4 search fan-outs but only 1.16 shopping fan-outs. This indicates that while ChatGPT needs multiple sources to write a detailed answer, it needs only a single, efficient query to Google Shopping to fill a product carousel with eight items.

The Data Breakdown: Google Shopping vs. Bing Shopping

To quantify the extent of this reliance, the study compared 43,000 products found in ChatGPT carousels against 200,000 organic shopping results from both Google and Bing. The methodology involved choosing diverse prompts across 10 industry verticals and using a sophisticated matching algorithm to identify product overlaps; a toy version of that matching logic is sketched after this section.

The Google Dominance

The results were conclusive. Approximately 45.8% of ChatGPT carousel products had an exact title match within the top 40 organic results of Google Shopping. When the criteria were expanded to “strong matches” (products that are clearly the same brand and model but may have slight title variations), the figure jumped to over 83%.

The Bing Discrepancy

In contrast, Bing’s influence on the shopping carousel was almost non-existent. Only 0.48% of products were an exact match for Bing’s top 40 results. While 11% of products showed some level of similarity to Bing results, nearly all of those products were also found on Google. In fact, across the entire dataset of 43,000 products, only 70 items (a negligible 0.16%) were found exclusively on Bing. ChatGPT is, in effect, ignoring Bing Shopping in favor of Google’s more robust index.
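The matching algorithm itself was not published, but the distinction between the two tiers is easy to approximate. Here is a toy version with invented titles, using normalized string equality for “exact” matches and token-overlap (Jaccard) similarity as a crude stand-in for the study’s more sophisticated “strong match” logic:

```python
import re

def normalize(title: str) -> str:
    return " ".join(title.lower().split())

def tokens(title: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", title.lower()))

def match_tier(carousel_title: str, shopping_titles: list[str]) -> str:
    # Exact: identical titles after normalization.
    if any(normalize(carousel_title) == normalize(t) for t in shopping_titles):
        return "exact"
    # Strong: same brand/model with minor title variation, approximated
    # here by token overlap; 0.8 is an arbitrary demo threshold.
    ct = tokens(carousel_title)
    best = max(len(ct & tokens(t)) / len(ct | tokens(t)) for t in shopping_titles)
    return "strong" if best >= 0.8 else "none"

google_top_40 = [
    "Keychron K2 Wireless Mechanical Keyboard",
    "Logitech MX Mechanical Mini for Mac",
]

print(match_tier("Keychron K2 Wireless Mechanical Keyboard", google_top_40))    # exact
print(match_tier("Keychron K2 Mechanical Keyboard (Wireless)", google_top_40))  # strong
print(match_tier("Razer BlackWidow V4 Pro", google_top_40))                     # none
```

At scale, this kind of tiered matching is what produces the exact-versus-strong split reported above for Google, and the near-zero overlap found for Bing.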
The Impact of Positional Bias

For retailers and e-commerce managers, one of the most critical findings of this study is the correlation between Google Shopping rank and ChatGPT carousel placement. The study found a clear sloping trendline: products ranking higher on Google are significantly more likely to appear, and to appear earlier, in ChatGPT. Key statistics regarding positioning include:

The top 10 rule: 60% of the strong product matches in ChatGPT come from the top 10 results in Google Shopping.

The top 20 rule: Nearly 84% of matches come from the top 20 Google Shopping results.

Carousel ranking: The first position in a ChatGPT carousel typically corresponds to a product found in the top 5 of Google Shopping organic results.

This suggests that ChatGPT is not just sourcing from Google; it is largely trusting Google’s existing ranking algorithm to determine which products are most relevant to the user. If you are not ranking on the first page of Google Shopping, your chances of appearing in a ChatGPT product recommendation are statistically slim.

Does Prompt Branding Change the Results?