Author name: aftabkhannewemail@gmail.com

Uncategorized

How To Build An AI SEO Strategy That Outlasts Tactics via @sejournal, @Kevin_Indig

Understanding the Shift: Why Tactics Alone Fail in the AI Era

The search engine optimization landscape is undergoing its most significant transformation since the introduction of the first ranking algorithms. With the integration of Large Language Models (LLMs) into search results through Google's AI Overviews (formerly SGE), Bing Chat, and conversational engines like Perplexity, the old playbook is being rewritten. Many digital marketers are responding to this shift by scrambling for quick fixes: tactics like mass-producing AI content or attempting to "hack" the latest update. However, tactics are temporary. A strategy built solely on tactics is fragile and prone to collapse whenever a search engine updates its core algorithm.

To succeed in the modern era, brands must move beyond a "tactic-first" mentality. An AI SEO strategy that outlasts tactics is one built on a foundation of data, user intent, and brand authority. It recognizes that while the tools for content creation and technical optimization have changed, the fundamental goal remains the same: providing the most valuable, authoritative, and accessible answer to a user's problem. This guide explores how to build a durable AI SEO strategy that remains effective even as the underlying technology evolves.

The Difference Between Tactics and Strategy in AI SEO

Before diving into the framework, it is essential to distinguish between a tactic and a strategy. A tactic is a specific action taken to achieve a small, immediate goal. Examples include using an AI writing tool to generate meta descriptions or using a scraper to find keyword gaps. While useful, these actions are easily replicated by competitors and offer no long-term competitive advantage.

A strategy, on the other hand, is a high-level plan that coordinates your resources to achieve a long-term vision. An AI SEO strategy focuses on how your brand will position itself within the AI-driven information ecosystem.
It considers how LLMs crawl data, how they cite sources, and how human behavior changes when interacting with chat interfaces. A durable strategy focuses on building "moats": unique advantages that AI cannot easily replicate, such as proprietary data, a distinctive brand voice, and deep topical authority.

Pillar 1: Answer Engine Optimization (AEO) and the Information Gain Model

Search engines are no longer just lists of links; they are "answer engines." This shift toward Answer Engine Optimization (AEO) requires a rethink of how content is structured. AI models are trained to synthesize information from multiple sources into a single, cohesive answer. To stay relevant, your content must be structured so that these models can easily parse and cite it.

Prioritizing Information Gain

In a world where AI can summarize the top ten search results in seconds, regurgitated content has zero value. If your article says the same thing as every other article on the web, an AI model will summarize the consensus and likely omit a link to your site. To survive, you must provide "information gain": new, unique information that isn't found elsewhere. This could include original research, case studies, personal experience, or a contrarian viewpoint backed by data. Information gain is what makes your content citeable by an AI engine.

Structuring for Fragmented Retrieval

AI models often retrieve information in chunks rather than reading entire pages, so a durable strategy involves optimizing those chunks. Use clear, descriptive subheadings, bulleted lists for technical specifications, and concise "TL;DR" summaries at the beginning of long-form pieces. By making your information modular, you increase the likelihood that an AI assistant will extract your specific data point or quote for its answer.
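The "modular chunks" idea is easy to see from a retrieval system's perspective. Below is a minimal Python sketch of heading-based chunking; it is an illustration, not any specific engine's pipeline, and it assumes Markdown-style "## " subheadings purely for the example:

```python
import re

def chunk_by_headings(text: str) -> list[dict]:
    """Split an article into one retrieval-sized chunk per subheading.

    Minimal sketch: real pipelines also normalize, embed, and size-limit
    chunks; here we only split on Markdown-style '## ' heading lines.
    """
    parts = re.split(r"(?m)^##\s+", text)
    chunks = []
    for part in parts:
        if not part.strip():
            continue  # skip any preamble before the first heading
        heading, _, body = part.partition("\n")
        chunks.append({"heading": heading.strip(), "body": body.strip()})
    return chunks

article = """## TL;DR
Short summary first.
## Battery life
Lasted 11 hours in our testing.
"""
chunks = chunk_by_headings(article)
# Each chunk pairs a descriptive heading with a self-contained body,
# which is what makes an individual data point independently citable.
```

A page written this way gives a retriever clean boundaries; a page written as one unbroken block forces the system to guess where one claim ends and the next begins.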
Pillar 2: Technical SEO for a Machine-Learning World

The technical side of SEO has evolved from simple indexing to ensuring "data readiness." If search engines are the engines, data is the fuel; if your site's data is messy, AI will struggle to interpret it correctly.

The Role of Structured Data (Schema.org)

Schema markup has never been more important. It serves as a translator between your human-readable content and the machine-readable requirements of LLMs. By using advanced schema types such as Product, Organization, Person, and FAQ, you provide explicit context that helps AI understand the relationships between different entities on your site. This reduces the "hallucination" risk for the AI and increases the chances of your brand being featured in rich snippets and AI-generated overviews.

Managing Crawl Budgets for LLM Bots

With the rise of bots like GPTBot, CCBot, and others, managing your crawl budget and permissions is a strategic necessity. A durable strategy involves making intentional decisions about which parts of your site should be accessible to AI crawlers. Blocking all AI bots might protect your intellectual property, but it could also make your brand invisible in conversational search results. A balanced approach allows access to high-value informational pages while protecting proprietary tools or sensitive data via robots.txt and advanced header tags.

Pillar 3: Authority and E-E-A-T in the Age of Generative AI

Google's emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) is a direct response to the flood of low-quality AI content. A durable AI SEO strategy leans heavily into the "Experience" aspect: AI can synthesize information, but it cannot experience a product, a location, or a process.

Showcasing First-Person Expertise

To outlast tactics, your content must emphasize the human element.
This means using phrases like "In our testing," "Our team found," or "Based on my 10 years in the industry." Highlighting the real people behind the content, complete with detailed author bios, links to social profiles, and a history of published work, creates a trust signal that AI-generated sites cannot replicate. This "human-in-the-loop" approach ensures that even if AI helps write the draft, the expertise is authentically human.

Building a Brand Moat

Brand searches are the most resilient form of traffic. If a user asks an AI, "What is the best CRM for small businesses?" the AI might list several options. If the user asks, "How do I set up a workflow in Salesforce?" they are already inside an ecosystem. A strategy focused on brand building ensures that you are the destination, not just a source.


Why most video ads fail — and what video metrics actually matter

Video advertising has entered an era of unprecedented accessibility. Today, a brand can launch a global campaign across YouTube, Instagram, TikTok, and Connected TV (CTV) with little more than a credit card and a high-speed internet connection. Platforms have perfected the art of distribution, delivering billions of impressions and views to nearly every demographic on the planet. For many marketers, the sheer scale of modern reach feels like a guaranteed win.

However, there is a growing disconnect between distribution and effectiveness. While digital dashboards glow with green arrows indicating millions of views and high completion rates, those numbers often fail to translate into actual business results. We are seeing a paradox in which campaigns generate massive platform engagement but produce almost no measurable impact on brand preference, search volume, or sales. The reality is that while it has never been easier to get a video seen, it has never been harder to get a video to matter.

The failure of most video ads isn't typically a failure of targeting or budget. It is a strategic failure rooted in a misunderstanding of what makes a viewer stop, listen, and remember. To fix the broken model of video advertising, we must move beyond vanity metrics and understand the nuanced relationship between creative execution and human psychology.

Most video ads fail because they misunderstand attention

The most common mistake in modern video advertising is treating digital platforms like traditional television. In the golden age of TV, the audience was essentially captive: if you were watching a show, you were likely to sit through the commercial break. Distribution was the primary hurdle; if you could afford the airtime, you had the audience's attention by default. In the digital world, distribution is a commodity, but attention is the scarcest resource on earth.
Today's viewers are not a captive audience. Whether they are scrolling through a social feed, waiting for a YouTube video to start, or watching a streaming service, they arrive with specific intent and established habits. They are looking for entertainment, education, or connection, not your sales pitch. Every ad is an interruption of that intent. When we plan for reach, we are simply buying the right to interrupt. When we plan for relevance, we are earning the right to stay.

Many marketing meetings focus on "impressions delivered." This is a dangerous trap. An impression is merely a technical confirmation that a file was served on a screen. It says nothing about whether a human being looked at it, processed the information, or felt an emotional response. When there is no connection between high view counts and downstream business metrics like search lift or site engagement, the campaign has failed to bridge the gap between "seen" and "absorbed."

The first five seconds are the entire negotiation

The introduction of the "Skip" button changed the fundamental nature of advertising: it turned every ad into a high-stakes negotiation. If you haven't given the viewer a reason to stay within the first few seconds, the negotiation is over and the skip button is pressed. Yet many advertisers still produce ads that bury the hook at the end of a long, cinematic buildup.

Early in the digital transition, common wisdom suggested putting branding front and center. Marketers would open with a high-resolution logo, polished product shots, and professional music cues to signal brand authority. While these ads look impressive in a boardroom presentation, they often trigger a "reflexive skip" in the real world. As soon as a viewer sees a corporate logo or a traditional commercial aesthetic, their brain identifies it as "not what I came for" and begins looking for the exit.

Successful video ads treat the first five seconds like a headline in a newspaper.
You don't lead with the author's name; you lead with the story. The opening frame must present a recognizable problem, a provocative question, or an unexpected visual that disrupts the scroll. The goal is to create "cognitive friction": something that makes the brain pause its autopilot mode to investigate what it is seeing. In brand lift analyses, we often find that the majority of an ad's impact occurs before the skip button even appears. If you don't win the first five seconds, the remaining fifty-five seconds are irrelevant. High-performing ads often delay the hard branding in favor of a narrative hook, earning the viewer's attention before revealing the messenger.

Higher production value often correlates with lower performance

One of the most jarring lessons for traditional creative directors is that "expensive" does not always mean "effective." In fact, on platforms like TikTok, Reels, and YouTube, overly polished studio content frequently underperforms scrappier, more authentic-looking video. Modern audiences have developed a filter for traditional advertising: when a video looks like it was made by a professional agency with a six-figure lighting budget, it immediately signals "advertisement."

Digital audiences crave authenticity. They respond to content that feels like it was created by a peer rather than a corporation. This is why phone-shot testimonials or simple, direct-to-camera explanations often drive higher engagement and conversion than cinematic masterpieces. The goal isn't to look cheap or amateurish; the goal is to look native to the platform. An ad on TikTok should look like a TikTok. An ad on LinkedIn should respect the professional visual grammar of that feed.

Algorithms reinforce this behavior. Social media algorithms prioritize watch time and retention. When a user sees a video that looks like an organic post from a friend or an influencer, they are more likely to watch the first few seconds.
If the content is valuable, they stay. If it looks like a TV commercial that was simply resized for a phone, they swipe away instantly. Performance declines when brands try to "upgrade" their visual identity at the expense of platform-native authenticity.

Designing for the sound-off environment

A significant portion of mobile video is consumed without sound. If your ad relies entirely on a voiceover


AI Max increases revenue 13% but drives higher CPA: Study

The Evolution of Search: Understanding the AI Max Shift

For over two decades, Google Ads was a game of syntax. Digital marketers spent countless hours refining keyword lists, obsessing over match types, and sculpting negative keyword lists to ensure their ads appeared for the most relevant queries. We are now witnessing the sunset of that era. Google is aggressively moving toward a future defined by intent rather than specific phrasing, and the spearhead of this movement is AI Max.

AI Max represents more than a minor feature update; it is a fundamental reimagining of how Search campaigns function. By integrating the automation logic found in Performance Max (PMax) directly into the core of Search, Google is attempting to bridge the gap between traditional keyword-based targeting and fully automated, intent-based bidding. But as a recent study reveals, this transition comes with significant financial implications that every advertiser must understand.

The Data Speaks: Growth vs. Efficiency

The core dilemma of AI Max is encapsulated in a recent analysis conducted by Mike Ryan of Smarter Ecommerce. After auditing more than 250 campaigns, the data paints a complex picture of what happens when advertisers hand the keys to Google's latest AI tool. The study found that while AI Max is undeniably effective at driving top-line growth, that growth often comes at a steep price.

The median results from the analysis show a 13% increase in revenue for campaigns utilizing AI Max. For many brands, a double-digit jump in revenue is a clear victory. However, the efficiency metrics tell a different story: during the same period, the median Cost Per Acquisition (CPA) rose by 16%. When costs rise faster than revenue, profit margins naturally tighten, and advertisers end up paying more to acquire the same, or only slightly more, volume. Furthermore, the Return on Ad Spend (ROAS) showed a staggering range of volatility.
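To see how a 13% revenue lift and a 16% CPA lift combine, consider a toy projection in Python. The baseline figures are hypothetical, and the constant average-order-value assumption (so conversions scale with revenue) is ours, not the study's:

```python
# Median lifts reported in the Smarter Ecommerce study.
REVENUE_LIFT = 0.13
CPA_LIFT = 0.16

def post_aimax_metrics(revenue: float, cost: float) -> dict:
    """Project cost and ROAS after applying the median lifts.

    Assumes constant average order value, so a 13% revenue lift implies
    13% more conversions; cost then scales by both the CPA lift and the
    conversion growth.
    """
    conversion_mult = 1 + REVENUE_LIFT  # constant-AOV assumption
    new_revenue = revenue * (1 + REVENUE_LIFT)
    new_cost = cost * (1 + CPA_LIFT) * conversion_mult
    return {
        "revenue": new_revenue,
        "cost": new_cost,
        "baseline_roas": revenue / cost,
        "roas": new_revenue / new_cost,
    }

m = post_aimax_metrics(revenue=100_000, cost=25_000)
# Revenue grows 13%, but cost grows roughly 31%, so ROAS falls about 14%.
```

Under these toy numbers, a campaign spending $25,000 to make $100,000 would spend about $32,770 to make $113,000: more absolute revenue, noticeably thinner margins.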
In some successful implementations, ROAS improved by as much as 42%; in others, it plummeted by 35%. This variance suggests that AI Max is not a "set it and forget it" solution, but rather a high-stakes tool that requires careful monitoring and strategic deployment.

What Exactly Is AI Max?

To understand why these performance swings occur, we must look at what AI Max actually does. It isn't a new campaign type in the way PMax was; instead, it is a suite of three core automated features designed to expand the reach of existing Search campaigns.

1. Search Term Matching

This is perhaps the most significant change. AI Max pushes beyond traditional keyword syntax, using broad match expansion coupled with "keywordless" targeting. Essentially, Google's algorithms analyze the content of your landing pages and the intent of a user's search query to serve an ad, even if that query doesn't contain a single keyword from your ad group. It focuses on the "why" behind the search rather than the "what."

2. Text Customization

AI Max takes dynamic search ads to the next level by automatically generating and testing ad copy. By analyzing what performs best for specific user segments, the system can customize headlines and descriptions in real time. The goal is to maximize relevance for the individual user, theoretically increasing click-through rates (CTR).

3. Final URL Expansion

In a traditional campaign, the advertiser selects the landing page. With Final URL Expansion, Google's AI decides which page on your website is the best fit for a given query. If a user searches for a specific product feature that is buried deep in your blog or on a sub-category page, AI Max can bypass your standard landing page and send the user directly to the most relevant content.

The Performance Paradox: Google's Claims vs. Real-World Results

There is a notable discrepancy between Google's official narrative and the independent data from the Smarter Ecommerce study.
Google reports that advertisers who activate AI Max features typically see a 14% increase in conversions or conversion value at a similar CPA or ROAS. For campaigns still relying heavily on exact and phrase match keywords, Google claims that lift can jump as high as 27%. So why the gap?

One significant factor flagged by Mike Ryan is that Google's 14% uplift statistic conspicuously excludes retail data. For e-commerce brands, this omission is a major red flag: retail is often the most competitive and complex sector of search marketing, and the exclusion of this data suggests that AI Max may struggle more in product-led environments than in service-based lead generation.

There is also a deeper irony in the adoption of these tools. Google suggests that the highest incremental benefits come from accounts that are still "old school" (using exact and phrase match). However, the advertisers most likely to adopt AI Max are the early adopters who already use Broad Match and Performance Max. According to the data, these advanced accounts actually see the lowest incremental benefit, because the AI is already doing much of the heavy lifting elsewhere.

Four Critical Pitfalls Identified in the Study

The shift to AI Max isn't just about higher CPAs; it introduces several structural risks that can erode campaign health if left unchecked. The Smarter Ecommerce study highlighted four primary areas of concern.

1. Broad Match Cannibalization

One of the most troubling findings was that AI Max often recycles existing traffic rather than finding new customers. Up to 63% of the time, AI Max was simply bidding on queries that the advertiser's existing keyword coverage would have already captured. Instead of providing true incrementality, the AI was often just shifting credit from one part of the account to another, sometimes at a higher cost.

2. Competitor Hijacking

Automation tools like AI Max are designed to find conversions wherever they can, and often the "low-hanging fruit" is competitor brand terms. In one analyzed account, AI Max scaled so aggressively into competitor brand names that it consumed 69% of the total Search impressions. While bidding on competitors can be a valid strategy, doing so unintentionally can lead to expensive bidding


New finding: ChatGPT sources 83% of its carousel products from Google Shopping via shopping query fan-outs

The Evolution of AI Search and the Shopping Data Mystery

For the past year, the tech world has watched closely as OpenAI has attempted to pivot ChatGPT from a conversational chatbot into a full-fledged search engine. With the rollout of integrated search features, the question on every SEO professional's and digital marketer's mind has been: where is this data coming from? While OpenAI's historical partnership with Microsoft suggested a heavy reliance on Bing, recent technical investigations have uncovered a surprising reality. When it comes to e-commerce, ChatGPT is looking toward Mountain View, not Redmond.

A comprehensive new study has revealed that ChatGPT sources a staggering 83% of its carousel products directly from Google Shopping. This discovery was made by analyzing "query fan-outs" (QFOs), the internal search queries ChatGPT generates to fetch real-time data. The findings suggest that despite OpenAI's move toward independence, the platform has developed a significant, perhaps even systemic, reliance on Google's product index to power its shopping recommendations.

Understanding the Technical Link: The id_to_token_map Discovery

The investigation into ChatGPT's sourcing began in late 2025, when AI researchers identified a specific field within ChatGPT's source code labeled id_to_token_map. While the field initially appeared to be a string of gibberish, it was actually base64 encoded. Once decoded, the data revealed a trove of parameters synonymous with the Google Shopping ecosystem. Researchers found specific identifiers such as productid and offerid, alongside locale and language parameters. Most tellingly, the decoded field contained the exact query used to trigger the product lookup.

By extracting these parameters, researchers were able to reconstruct full Google Shopping URLs. When these URLs were tested, they led directly to the same products displayed within the ChatGPT interface.
This technical "smoking gun" proved that ChatGPT isn't just finding products on the web through general crawling; it is actively querying Google's structured shopping data to populate its interactive carousels. This raises vital questions about the architecture of AI search, and about how much of the "AI answer" is simply a re-ranking of existing search engine results.

What Are Shopping Query Fan-Outs?

To understand how ChatGPT retrieves information, we have to look at "query fan-outs." When a user types a prompt like "best budget mechanical keyboards," ChatGPT doesn't just consult its training data. Instead, it "fans out" the request into multiple secondary search queries to find current web results. The study categorized these into two types: normal search fan-outs and shopping query fan-outs (QFOs). The data shows that these two processes are fundamentally different and operate on separate tracks.

After analyzing 1.1 million shopping QFOs, researchers found that shopping fan-outs are unique to the user prompt 99.7% of the time. More importantly, they are distinct from the general search fan-outs 98.3% of the time. This suggests that ChatGPT knows when a user is in a "buying" mindset and switches to a specific retrieval pipeline designed for products.

The Differences in Query Structure

The study found a clear divergence in how these queries are constructed:

Search fan-outs: These queries average 12 words in length. They are designed to be descriptive and contextual, aiming to retrieve web pages, articles, and reviews that can be used to synthesize a written response.

Shopping fan-outs: These queries are much shorter, averaging only seven words. Their primary goal is to hit a specific shopping index and return a list of products. They act more like a traditional search bar entry than a conversational prompt.

Furthermore, the frequency of these queries differs. On average, a single user prompt triggers 2.4 search fan-outs but only 1.16 shopping fan-outs.
This indicates that while ChatGPT needs multiple sources to write a detailed answer, it needs only a single, efficient query to Google Shopping to fill a product carousel with eight items.

The Data Breakdown: Google Shopping vs. Bing Shopping

To quantify the extent of this reliance, the study compared 43,000 products found in ChatGPT carousels against 200,000 organic shopping results from both Google and Bing. The methodology involved choosing diverse prompts across 10 industry verticals and using a matching algorithm to identify product overlaps.

The Google Dominance

The results were conclusive. Approximately 45.8% of ChatGPT carousel products had an exact title match within the top 40 organic results of Google Shopping. When the criteria were expanded to "strong matches" (products that are clearly the same brand and model but may have slight title variations), the number jumped to over 83%.

The Bing Discrepancy

In contrast, Bing's influence on the shopping carousel was almost non-existent. Only 0.48% of products were an exact match for Bing's top 40 results. While 11% of products showed some level of similarity to Bing results, nearly all of those products were also found on Google. In fact, across the entire dataset of 43,000 products, only 70 items (a negligible 0.16%) were found exclusively on Bing. ChatGPT is essentially ignoring Bing Shopping in favor of Google's more robust index.

The Impact of Positional Bias

For retailers and e-commerce managers, one of the most critical findings of this study is the correlation between Google Shopping rank and ChatGPT carousel placement. The study found a clear sloping trendline: products ranking higher on Google are significantly more likely to appear, and to appear earlier, in ChatGPT. Key statistics regarding positioning include:

The Top 10 Rule: 60% of the strong product matches in ChatGPT come from the top 10 results in Google Shopping.

The Top 20 Rule: Nearly 84% of matches come from the top 20 Google Shopping results.

Carousel Ranking: The first position in a ChatGPT carousel typically corresponds to a product found in the top 5 of Google Shopping organic results.

This suggests that ChatGPT is not just sourcing from Google; it is largely trusting Google's existing ranking algorithm to determine which products are most relevant to the user. If you are not ranking on the first page of Google Shopping, your chances of appearing in a ChatGPT product recommendation are statistically slim.

Does Prompt Branding Change the Results?


AI Max increases revenue 13% but drives higher CPA: Study

The Paradigm Shift: Understanding Google's Move Toward AI Max

The landscape of digital advertising is undergoing its most significant transformation since the introduction of quality scores and keyword bidding. Google's latest evolution in the search ecosystem, known as AI Max, represents a fundamental shift away from the traditional mechanics of search marketing. For decades, advertisers have relied on the precise syntax of keywords to connect with potential customers. With AI Max, Google is steering the industry toward a future defined by intent-based matching and algorithmic automation.

A comprehensive new study by Mike Ryan of Smarter Ecommerce, which analyzed data from over 250 campaigns, provides a sobering yet illuminating look at the reality of this transition. The findings suggest that while AI Max is a powerful engine for growth, it comes with a distinct set of economic trade-offs. Specifically, the study revealed a median revenue increase of 13%, accompanied by a 16% rise in Cost Per Acquisition (CPA). This data highlights the central dilemma for modern marketers: how to scale reach without sacrificing the bottom-line efficiency that keeps a business profitable.

What is AI Max? Bringing PMax-Style Automation to Search

To understand the implications of the Smarter Ecommerce study, one must first understand what AI Max actually is. Rather than a completely new campaign type that replaces existing structures, AI Max is better described as a suite of Performance Max (PMax) technologies integrated directly into classic Search campaigns. It represents Google's effort to bridge the gap between the granular control of traditional search and the "black box" efficiency of fully automated systems. AI Max is built on three core pillars that fundamentally change how an ad finds its way to a user:

1. Search Term Matching (Keywordless Targeting)

This feature moves beyond broad match expansion.
It allows Google's algorithms to target queries based on user intent and landing page content, even if the advertiser hasn't specified a particular keyword. It essentially treats the entire web and the user's historical behavior as signals, rather than relying on a static list of search terms.

2. Text Customization (Dynamic Ad Copy)

AI Max leverages generative AI to craft ad copy in real time. By analyzing the user's specific query and the context of their search, the system dynamically adjusts headlines and descriptions to maximize relevance. While this can improve click-through rates (CTR), it also reduces the advertiser's direct control over brand voice and messaging specifics.

3. Final URL Expansion

In a traditional setup, an advertiser sends traffic to a specific, hand-picked landing page. Final URL Expansion allows Google to redirect users to the most relevant page on a website based on the search query. While this helps capture long-tail traffic, it requires a highly optimized website structure to ensure the AI doesn't send users to irrelevant or low-converting pages.

Analyzing the Numbers: Revenue Growth vs. Efficiency Loss

The Smarter Ecommerce study offers a data-driven reality check against Google's more optimistic internal benchmarks. According to Mike Ryan's analysis, the outcomes of adopting AI Max are far from uniform. The range of Return on Ad Spend (ROAS) was particularly volatile, swinging from a 42% uplift to a staggering 35% decrease. This volatility suggests that AI Max is not a "set it and forget it" solution; its success depends heavily on the existing account structure and the specific industry vertical.

Google's official stance is that advertisers who activate AI Max typically see an average of 14% more conversions or conversion value at a similar CPA or ROAS. For accounts that still rely heavily on exact and phrase match keywords, Google claims this uplift can jump as high as 27%.
However, there is a significant discrepancy between these figures and the independent study. Ryan notes that Google's 14% uplift statistic conspicuously excludes the retail sector, a massive omission considering that ecommerce often faces the tightest margins and most competitive bidding environments.

The median 16% increase in CPA found in the study suggests that AI Max is currently "buying" growth. By expanding reach into less certain queries and using intent-based matching, the system finds new customers, but often at a higher cost than the highly refined, keyword-targeted traffic that veteran advertisers have spent years optimizing.

The Four Critical Pitfalls of AI Max

As advertisers begin to experiment with these features, the Smarter Ecommerce study identified four specific pitfalls that can drain budgets and compromise campaign integrity if left unmanaged.

1. Broad Match Cannibalization

One of the most concerning findings was that up to 63% of the time, AI Max was simply recycling existing coverage rather than discovering new, incremental queries. Instead of finding new customers, the AI often bids on terms the advertiser was already capturing through existing exact or phrase match keywords. This creates a situation where the advertiser is essentially paying more for the same traffic through an automated channel.

2. Competitor Brand Hijacking

AI Max's aggressive pursuit of intent can sometimes lead it into sensitive territory. The study highlighted one account where AI Max scaled so aggressively into competitor brand terms that it eventually consumed 69% of total search impressions. While bidding on competitors can be a valid strategy, having an automated system do so without strict parameters can lead to "bidding wars" that rapidly inflate CPAs and damage professional relationships between competing brands.

3. The Reporting Overload Challenge

The transparency that search marketers have long enjoyed is becoming harder to maintain.
With AI Max, search term and ad combination reports can easily run into tens of thousands of rows, and auditing them manually has become nearly impossible. For many advertisers, this leads to a lack of oversight in which wasteful spending hides within thousands of low-volume, automated queries that collectively drain the budget.

4. Search Partner Network (SPN) Inefficiency

The Search Partner Network has long been a point of contention for Google Ads users, and AI Max appears to exacerbate these issues. In one campaign analyzed by Ryan, half a million monthly impressions were funneled into


New finding: ChatGPT sources 83% of its carousel products from Google Shopping via shopping query fan-outs

The landscape of artificial intelligence and search engine technology is shifting at a breakneck pace. For years, the industry assumption was that OpenAI’s partnership with Microsoft meant that ChatGPT would naturally lean on Bing for its real-time data needs. However, as OpenAI pursues greater independence and refines its search capabilities, a surprising new reality has emerged. A comprehensive study into ChatGPT’s product recommendation engine has revealed a staggering reliance on Google Shopping, rather than Microsoft’s own search infrastructure.

New research indicates that approximately 83% of the products appearing in ChatGPT’s interactive shopping carousels are sourced directly from Google Shopping. This discovery was made by analyzing “query fan-outs” (QFOs)—the behind-the-scenes search queries the AI generates to fetch live data. The findings suggest that despite its corporate ties to Redmond, OpenAI’s “Search” functionality is deeply intertwined with the Mountain View ecosystem when it comes to e-commerce and product discovery.

Understanding the Technical Framework: What is a Query Fan-Out?

To understand how ChatGPT chooses which products to show you, we must first look at the mechanics of its retrieval-augmented generation (RAG) process. When you ask ChatGPT for the “best running shoes for flat feet,” the model doesn’t just rely on its training data. It generates specific sub-queries to browse the web for current pricing, availability, and reviews. These sub-queries are known in the research community as Query Fan-Outs (QFOs).

In late 2025, researchers identified a hidden field within ChatGPT’s source code labeled id_to_token_map. When this field is decoded from its Base64 format, it reveals the specific parameters the AI uses to identify products.
These parameters include specific identifiers such as productid and offerid, as well as locale and language settings. Most importantly, these parameters are identical to those used by Google Shopping’s internal indexing system.

The Shopping QFO vs. The Search QFO

The study found that ChatGPT treats product discovery as a fundamentally different task than general information gathering. There are two distinct types of fan-outs occurring simultaneously:

Search Query Fan-Outs: These are longer, more descriptive queries (averaging 12 words) used to find blog posts, reviews, and articles. They are designed for vector search—comparing “chunks” of text to find the most relevant context for a written response.

Shopping Query Fan-Outs: These are shorter (averaging 7 words) and highly targeted. Their sole purpose is to hit a structured shopping index to populate the visual carousel.

The data shows that while a single prompt might trigger multiple search fan-outs to gather information, it usually triggers only one or two shopping fan-outs. This suggests that ChatGPT relies on a single authoritative source—Google Shopping—to fill its eight-product carousel in one go.

Inside the Study: Measuring the Google vs. Bing Divide

To prove that this wasn’t an anecdotal fluke, researchers utilized data from Peec AI to conduct a large-scale analysis. The study scrutinized over 43,000 products appearing in ChatGPT carousels across 10 major industry verticals. These included highly competitive categories like Electronics, Beauty & Personal Care, Home & Kitchen, and Apparel. The researchers then cross-referenced these ChatGPT results against the top 40 organic shopping results from both Google and Bing. To ensure accuracy, they excluded paid advertisements and sponsored listings, focusing entirely on organic rankings.

The Matching Methodology

Matching products across different platforms is notoriously difficult because titles are often rewritten or truncated.
To solve this, a three-stage matching algorithm was used:

Stage 1: Exact Match. A strict comparison of strings, ignoring case and whitespace.

Stage 2: Near-Exact Match. Using a sequence matcher to account for minor differences in punctuation or special characters (like different types of dashes).

Stage 3: Hybrid Match. A weighted average of character-level similarity (40%) and word overlap (60%).

A “strong match” was defined as any product reaching a similarity score of 0.8 or higher. This threshold typically ensures that the brand and the specific product model are identical, even if the descriptive text varies slightly.

The Findings: A Near Total Dominance for Google

The results of the comparison were conclusive. Across the 43,000 products analyzed, 45.8% were an exact string match with Google’s organic shopping results. For Bing, that number plummeted to just 0.48%. When looking at “strong matches” (the 0.8 threshold), 83.3% of ChatGPT’s carousel products were found within the top 40 Google Shopping results. In contrast, Bing only shared 10.9% of the products featured in ChatGPT.

More tellingly, of the few products Bing did match, nearly all of them were also present in the Google results. Only 0.16% of the products—a mere 70 items out of 43,000—were exclusive to Bing. This confirms that ChatGPT is almost certainly not using Bing as a primary or even secondary source for shopping data.

The Influence of Rank: Positional Bias in the Carousel

One of the most critical takeaways for e-commerce brands is the correlation between Google Shopping rank and ChatGPT carousel placement. The study found a clear “sloping trendline” that links the two. If a product ranks in the top five on Google Shopping, it is significantly more likely to appear in the first or second position of the ChatGPT carousel. The data revealed that 60% of all strong matches in the ChatGPT carousel were pulled from the top 10 results on Google.
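The three-stage cascade described in the methodology can be sketched in a few lines of Python. This is a hypothetical reconstruction for illustration only: the study’s actual code is not public, Python’s difflib.SequenceMatcher stands in for the unnamed “sequence matcher,” and the 0.95 near-exact cutoff is an assumed value the study does not specify.

```python
from difflib import SequenceMatcher

STRONG_MATCH = 0.8  # threshold for a "strong match" reported in the study

def normalize(title: str) -> str:
    # All stages ignore case and surplus whitespace.
    return " ".join(title.lower().split())

def match_score(a: str, b: str) -> float:
    """Score two product titles with the three-stage cascade."""
    a, b = normalize(a), normalize(b)
    # Stage 1: exact match after normalization.
    if a == b:
        return 1.0
    # Stage 2: near-exact match via a sequence matcher, tolerating small
    # punctuation differences such as different dash characters.
    char_sim = SequenceMatcher(None, a, b).ratio()
    if char_sim >= 0.95:  # assumed cutoff; the study does not state one
        return char_sim
    # Stage 3: hybrid score, weighting character-level similarity at 40%
    # and word overlap at 60%, per the study's description.
    words_a, words_b = set(a.split()), set(b.split())
    union = words_a | words_b
    word_overlap = len(words_a & words_b) / len(union) if union else 0.0
    return 0.4 * char_sim + 0.6 * word_overlap

# Only the dash type differs here, so stage 2 catches it as a near-exact match.
print(match_score("Sony WH-1000XM5 Wireless Headphones",
                  "Sony WH–1000XM5 wireless headphones") >= STRONG_MATCH)  # → True
```

Any pair scoring at or above 0.8 would count toward the “strong match” figures quoted in the findings.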
When expanding that to the top 20 Google results, the match rate rises to nearly 84%. This suggests that ChatGPT isn’t just picking random products from the web; it is effectively “cloning” the top of the Google Shopping organic index. If your product doesn’t rank on the first page of Google Shopping for a specific query, the chances of it appearing in a ChatGPT recommendation are statistically slim.

Analyzing Performance Across Industry Verticals

The study was designed to be robust, covering 10 different industries to ensure the behavior wasn’t limited to a specific niche. The findings remained consistent across the board, proving that this is a systemic architectural choice by OpenAI.

Branded vs. Non-Branded Queries

Researchers also looked at whether the type of prompt


New finding: ChatGPT sources 83% of its carousel products from Google Shopping via shopping query fan-outs

In the rapidly evolving landscape of artificial intelligence and digital commerce, the question of where AI models derive their data has become a central focus for marketers, SEO professionals, and tech enthusiasts. For a long time, the industry assumption was that ChatGPT, through OpenAI’s close partnership with Microsoft, relied almost exclusively on Bing for its real-world data retrieval. However, a groundbreaking new study has revealed a startling shift in this dynamic, specifically regarding how ChatGPT handles e-commerce and product recommendations.

Recent forensic analysis of ChatGPT’s source code and output behavior has uncovered a significant trend: OpenAI’s flagship chatbot is now sourcing approximately 83% of its carousel products directly from Google Shopping. This discovery, centered around a process known as “shopping query fan-outs,” suggests that despite its corporate ties to Microsoft, ChatGPT is increasingly leaning on Google’s massive shopping index to power its consumer-facing product recommendations.

For brands and retailers, this finding is more than just a technical curiosity; it represents a fundamental shift in how “AI SEO” works. If your products aren’t ranking on Google Shopping, the chances of them appearing in a ChatGPT product carousel are now statistically slim. Let’s dive deep into the mechanics of this study, the data behind the findings, and what it means for the future of digital publishing and e-commerce.

The Technical Smoking Gun: Decoding id_to_token_map

The investigation into ChatGPT’s sourcing began in late 2025, when researchers identified a mysterious field within the platform’s source code labeled id_to_token_map. While this field initially appeared to be a string of nonsensical characters, it was actually base64 encoded. Upon decoding this data, researchers found a treasure trove of information that pointed directly to Google’s infrastructure.
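Base64 decoding of this kind is easy to reproduce. The sketch below uses an invented payload purely for illustration; the actual contents of id_to_token_map are not reproduced here, and the field names mirror only those the article mentions.

```python
import base64
import json

# Invented payload for illustration only; the values observed in
# ChatGPT's real id_to_token_map field differ.
payload = {"productid": "1234567890", "offerid": "987", "hl": "en", "gl": "US"}
encoded = base64.b64encode(json.dumps(payload).encode("utf-8")).decode("ascii")

# Decoding recovers the structured parameters hidden in the opaque string.
decoded = json.loads(base64.b64decode(encoded))
print(decoded["productid"])  # → 1234567890
```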
The decoded fields contained specific Google Shopping parameters, including productid, offerid, and various language or locale identifiers. Most importantly, the data revealed the exact query used by the AI to look up a specific product—a process known as a “shopping query fan-out” (QFO).

To verify this connection, researchers attempted to reconstruct full Google Shopping URLs using only the parameters extracted from ChatGPT’s code. For example, when a user asked for the “best smartphones under $500,” ChatGPT generated a product carousel. By extracting the hidden parameters from that carousel, researchers were able to generate a link that led directly to the exact product page on Google Shopping. The match was not just a similarity; it was a precise architectural link, proving that ChatGPT wasn’t just “finding” these products on the web—it was actively querying Google’s specialized shopping index.

Understanding Query Fan-Outs: Search vs. Shopping

To understand why this is happening, we must first understand the concept of a “Query Fan-Out” (QFO). When you submit a prompt to an AI like ChatGPT, the model doesn’t just “know” the answer when the information is recent or specific. Instead, it generates several internal search queries—fan-outs—to gather data from the web before synthesizing a response. This is the core of Retrieval-Augmented Generation (RAG).

The study analyzed over 1.1 million shopping QFOs to determine if they differed from standard search QFOs. The results were telling. Shopping QFOs were unique to the user prompt 99.7% of the time, meaning the AI creates a very specific “shopping-only” search path that is distinct from its general knowledge retrieval.

Word Count and Intent

There is a distinct difference in the complexity of these queries. General search QFOs, used to gather context for a written answer, averaged about 12 words in length.
This makes sense, as contextual retrieval benefits from the nuances of vector search, which requires more linguistic detail to find relevant web pages. In contrast, shopping QFOs averaged only seven words. This brevity indicates a different objective. Rather than seeking a broad narrative or an article, the AI is targeting a structured index. It essentially acts as a “searcher” on Google Shopping, using concise keywords to trigger the most relevant product listings. The study suggests that for ChatGPT to populate an eight-product carousel, a single page of Google Shopping results is usually sufficient.

Frequency of Queries

The study also found that ChatGPT uses fewer queries for shopping than for general information. On average, a prompt triggers 2.4 search fan-outs but only 1.16 shopping fan-outs. This efficiency further supports the theory that ChatGPT is relying on the heavy lifting already performed by Google’s ranking algorithms. Instead of “shopping around” across multiple search engines, it goes to the most comprehensive source, retrieves the top results, and displays them.

Google vs. Bing: The Battle for the Carousel

The most striking aspect of this research is the disparity between Google and Bing. Given the multi-billion dollar partnership between OpenAI and Microsoft, one would expect Bing Shopping to be the primary source for these carousels. The data, however, tells a different story. Researchers analyzed 43,000 products across 5,000 ChatGPT carousels, comparing them against the top 40 organic results from both Google and Bing. The methodology involved a multi-stage matching algorithm to account for minor differences in product titles or formatting.

The Findings

Google Shopping Overlap: Over 83% of the products featured in ChatGPT carousels were found within the top 40 organic results on Google Shopping.

Bing Shopping Overlap: Only 11% of the products appeared in Bing’s top 40 results.
Exact Matches: 45.8% of ChatGPT’s product titles were an exact string match for Google Shopping titles. For Bing, the exact match rate was a negligible 0.48%.

Exclusive Sourcing: Out of 43,000 products, only 70 (0.16%) were found exclusively on Bing. In almost every instance where a product appeared on Bing, it was also present—and usually ranked higher—on Google.

These numbers indicate that ChatGPT’s product retrieval system is almost entirely dependent on Google’s organic shopping index. While it may still use Bing for general web context (the text-based portions of the answer), the “visual” commerce portion of the experience is powered by Google.

Positional Bias: Why Ranking Still Matters

For years, SEOs have lived by the mantra that “the best place to hide a dead body is the second page of Google.” It appears this rule applies


200+ AI audits reveal why some industries struggle in AI search

The Changing Landscape of Digital Discovery

For more than two decades, the relationship between content creators and search engines was governed by a predictable, symbiotic trade. Publishers created high-quality content designed to satisfy user intent, search engines indexed that content and ranked it, and users clicked through to the publisher’s website. This flow created an ecosystem where traffic could be converted into revenue through advertising, affiliate links, lead generation, or direct product sales.

Today, that fundamental contract is being rewritten. The rise of zero-click searches and the rapid integration of Artificial Intelligence (AI) into search results—via platforms like Google’s AI Overviews, SearchGPT, and Perplexity—has introduced a new intermediary. The question is no longer just “Will I rank in the top three?” but rather “Will the AI cite me as a source?” and “If it does, will the user still need to visit my site?”

To understand the mechanics of this shift, a comprehensive study involving over 200 AI visibility audits across 10 major industries was conducted. The results provide a startling look at who is winning the AI search war, who is losing, and why the industries that rely most heavily on search traffic are often the ones making themselves the hardest for AI to find.

The Methodology: Measuring AI Visibility

The audit was conducted using a standardized rubric to ensure consistency across different sectors. A total of 201 audits were performed, assessing each site’s performance based on an overall AI visibility score and four critical subscores:

Freshness: How recently the content was updated and whether that update is machine-readable.

Structure: The technical organization of the data, including HTML hierarchy and schema usage.

Authority and Evidence: The presence of verifiable facts, outbound citations, and expertise signals that justify an AI’s decision to cite the source.
Extractability: The ease with which an AI agent can crawl, parse, and “understand” the core content of a page.

The dataset spanned 10 specific industries, including coupons, affiliate reviews, travel booking, local directories, personal finance, health information, legal directories, online courses, job boards, and recipes. While the sample included a variety of page types, it was intentionally homepage-heavy (131 homepages versus 13 articles). This distinction is vital because homepages are traditionally designed for human conversion and marketing, often lacking the dense, evidence-based content that AI systems prioritize for citations.

Industry Performance: Winners and Losers in AI Search

The data revealed a clear hierarchy in how different industries are handled by AI search models. Some industries are positioned well for the transition, while others are at extreme risk of vanishing from the digital conversation entirely. Below is the breakdown of industry performance, ranked by their median overall scores and “at risk” status.

Rank  Industry                                   Error Rate  Median Overall  Median Authority  Median Extractability  At Risk
1     Travel booking and trip planning           33.3%       45.5            31.0              52.0                   High
2     Job boards and career marketplaces         40.0%       64.0            44.0              74.0                   High
3     Legal directories and lead gen             35.0%       63.0            44.0              74.0                   High
4     Coupons and deals                          20.0%       62.0            36.0              74.0                   High
5     Local directories and lead gen             5.3%        64.0            38.0              74.0                   Medium
6     Online courses and learning marketplaces   30.0%       67.5            46.5              80.0                   Medium
7     Health info and symptom lookups            15.0%       69.0            52.0              80.0                   Low
8     Personal finance comparison                5.0%        67.0            52.0              78.0                   Low
9     Affiliate product reviews                  0.0%        69.5            54.0              74.0                   Low
10    Recipes and cooking content                5.0%        75.0            55.5              81.5                   Low

The rankings show that the most technical and data-driven industries, such as recipes and health information, are currently the best-prepared for AI search. Conversely, industries like travel and job boards are struggling with massive error rates and low authority scores.
The Technical Barrier: Access Failures and “AI-Dark” Industries

The most immediate and surprising takeaway from the 200+ audits is the prevalence of access failures. Nearly 19% of the sites audited returned an error, meaning the AI agent was either blocked by the site’s security protocols or could not process the page due to technical limitations. In certain industries, this problem is systemic. Job boards (40% error rate), legal directories (35%), and travel booking sites (33.3%) are effectively “AI-dark.” If an AI cannot reach the content, it cannot include the brand in its generated response. Instead, the model will either hallucinate, use a competitor’s data, or provide a generic answer that bypasses the industry leaders entirely.

Common Causes of Access Failure

Why are so many high-traffic sites invisible to AI? The audits highlighted three primary technical roadblocks:

First, many enterprises employ aggressive bot protections, rate limiting, and Web Application Firewalls (WAFs). While these tools are essential for preventing malicious scraping and DDoS attacks, they often fail to distinguish between a harmful bot and a legitimate AI search agent. By treating these agents as hostile, brands are essentially opting out of the next generation of search visibility.

Second, the rise of modern web development has led to “app-style” rendering. Many sites rely heavily on JavaScript to load content. If the core information does not arrive in the initial HTML and the AI agent does not wait for the script to execute, the site appears empty. This results in a “0” score for extractability, even if the site looks beautiful to a human user.

Third, content gating and intrusive UI elements—such as popups, forced logins, or script-heavy overlays—can prevent an AI from cleanly resolving the page. When an agent encounters these barriers, it often abandons the attempt, leading to a loss of citation opportunities.
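A quick way to test for the second roadblock is to check whether a page’s key content is present in the raw server HTML, which is roughly what a non-rendering agent sees before any JavaScript runs. The sketch below is an illustrative check using only the Python standard library; it is not a tool used in the study, and a real audit would fetch the live response rather than a string.

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collects the text a non-rendering crawler would see in raw HTML."""
    SKIP = {"script", "style", "noscript", "template"}

    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        # Keep text only when we are outside script/style blocks.
        if not self.skip_depth and data.strip():
            self.parts.append(data.strip())

def content_in_initial_html(raw_html: str, key_phrases: list) -> bool:
    """True if every key phrase is visible without executing JavaScript."""
    parser = VisibleTextExtractor()
    parser.feed(raw_html)
    text = " ".join(parser.parts).lower()
    return all(phrase.lower() in text for phrase in key_phrases)

# An "app-style" page whose content arrives only via JavaScript:
spa = '<html><body><div id="root"></div><script>render("Best travel deals")</script></body></html>'
print(content_in_initial_html(spa, ["Best travel deals"]))  # → False
```

If the check fails for content that matters, the page is relying on client-side rendering and risks the “0 extractability” outcome described above.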
The Content Gap: Trust and Authority in the Age of AI

Even when an AI can successfully access and parse a website, it doesn’t always choose to cite it. This is where “Trust Failure” occurs. Across the 163 successfully processed audits, the median overall score was 66, placing the vast majority of sites in the “Inconsistent Visibility” category. The gap is not a matter of formatting; it’s a matter of proof. Most websites have mastered the art of technical SEO (the median structure score was a high 92), but they fail on the metrics that AI models


How to chunk content and when it’s worth it

Introduction to Content Chunking in the AI Era

In the rapidly evolving landscape of search engine optimization and digital publishing, the way we structure information has become just as critical as the information itself. As we move deeper into an era defined by Large Language Models (LLMs), Generative AI, and passage-based indexing, the traditional “wall of text” is no longer just a deterrent for human readers—it is a technical barrier for search algorithms. This has brought a technique known as “content chunking” to the forefront of SEO strategy.

Content chunking is the practice of breaking down long-form information into smaller, self-contained, and manageable “chunks.” While the concept originated in cognitive psychology to describe how humans process memory, its application in digital marketing has become a cornerstone for visibility in AI-driven search environments.

However, the technique is not without its controversies. Recent discussions within the SEO community, including insights from Google, suggest that over-optimizing for “bite-sized” content might actually strip away the depth and nuance that readers crave. The challenge for modern creators is to find the equilibrium between structured, retrievable data for AI and rich, engaging narratives for humans. Understanding how to chunk content effectively, and knowing precisely when it is worth the effort, is essential for anyone looking to maintain a competitive edge in search rankings and user engagement.

What is chunking?

At its core, chunking is the organizational process of dividing text into distinct, modular units of meaning. In a well-chunked article, each section serves a specific purpose and focuses on a singular idea. Unlike traditional academic writing, where paragraphs can span half a page and cover multiple sub-points, a chunked paragraph is laser-focused.
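In practice, a first pass at chunking can be mechanical: split a draft at its headings so that each unit carries its own heading and a focused body. The following is a toy Python sketch of that idea, using markdown-style headings as the delimiter; it is an illustration, not any particular platform’s implementation.

```python
import re

def chunk_by_headings(article: str) -> list:
    """Split markdown-style text into self-contained heading + body chunks."""
    chunks = []
    current = {"heading": "Introduction", "body": []}
    for line in article.splitlines():
        match = re.match(r"#{1,6}\s+(.*)", line)
        if match:
            # A new heading closes the previous chunk, if it has any body text.
            if current["body"]:
                chunks.append({"heading": current["heading"],
                               "body": " ".join(current["body"])})
            current = {"heading": match.group(1).strip(), "body": []}
        elif line.strip():
            current["body"].append(line.strip())
    if current["body"]:
        chunks.append({"heading": current["heading"],
                       "body": " ".join(current["body"])})
    return chunks
```

Each resulting chunk pairs one heading with one focused body, which is the shape both human scanners and passage-level retrieval systems favor.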
The primary goal of chunking is to ensure that a reader—or an AI crawler—can extract the core message of a passage without needing to read the entire surrounding context. This does not mean the information is “dumbed down.” Rather, it is distilled. A single chunk should contain enough information to stand on its own as a complete thought, typically introduced by a descriptive heading and followed by a concise explanation or set of data points.

When content is chunked correctly, it respects the “cognitive load” of the reader. Cognitive load refers to the amount of mental effort being used in the working memory. By segmenting information, you allow the reader to “reset” their focus with every new heading, making it easier for them to retain complex information without feeling overwhelmed by a dense block of text.

Does chunking help AI or people?

The debate surrounding chunking often pits AI optimization against human readability. Some argue that writing in chunks is a form of “gaming the system” for AI models like GPT-4 or Google’s Gemini. However, the reality is that the benefits of chunking are universal. What makes content easy for an AI to parse often makes it significantly easier for a human to scan and understand.

The AI Perspective: Retrieval-Augmented Generation (RAG)

To understand why AI loves chunked content, we must look at how modern AI search engines operate. Systems like Google’s AI Overviews or Perplexity use a process called Retrieval-Augmented Generation (RAG). When a user asks a question, the AI doesn’t just “remember” everything it learned during training; it actively searches the web for relevant passages.

AI systems operate at the passage level. If you have a 3,000-word article about digital marketing, the AI isn’t going to cite the whole page. It looks for the specific 100-word “chunk” that answers the user’s specific query.
If that answer is buried in a long, meandering paragraph that touches on three different topics, the AI may struggle to identify the definitive answer. By providing clear, focused chunks, you increase the “retrievability” of your content, making it much more likely to be featured as a source in AI-generated answers.

The Human Perspective: The Scanning Culture

From a human standpoint, the way we consume content online has changed. Most users do not read articles from start to finish. Instead, they scan in an “F-shaped” pattern, looking for headers, bullet points, and short paragraphs that satisfy their immediate information needs.

Chunking caters directly to this behavior. When content is organized into units of meaning, it facilitates “nonlinear” reading. A user looking for a specific step in a tutorial can skip directly to the relevant chunk without being forced to wade through introductory fluff. This improves the overall user experience, reduces frustration, and can lead to higher dwell times as users find exactly what they need quickly.

When to chunk content

While chunking is a powerful tool, it is not a universal solution. Applying a rigid chunking structure to every single piece of content on your site can actually backfire, leading to a fragmented user experience that lacks soul or narrative flow. Deciding when to invest the time into chunking requires a strategic evaluation of your content’s purpose.

Prioritize Chunking for Information-Heavy Pages

The best candidates for chunking are pages that serve as educational or functional resources. If your goal is to provide specific answers to specific questions, chunking is non-negotiable. You should focus your efforts on:

Technical Guides and Documentation: These require precise, step-by-step instructions where each phase of the process is its own discrete unit.

Bottom-of-Funnel (BOF) Content: When users are comparing products or looking for pricing details, they want facts, not a story. Chunking helps them find the data they need to make a decision.

Complex Industry Topics: If you are explaining a dense concept—like “keyword cannibalization” or “quantum computing”—breaking it into chunks prevents the reader from becoming lost in the jargon.

High-Traffic, Low-Engagement Pages: If your analytics show that a page gets thousands of hits but users bounce within 15 seconds, it’s likely that the information is there, but it’s too hard to find. Chunking can rescue these pages.

When to Avoid Rigid Chunking

There are instances where chunking can actively harm the quality of the writing. If your content relies on an emotional arc, a building argument, or a specific prose rhythm, breaking it


How the DOM affects crawling, rendering, and indexing

In the early days of search engine optimization, the process was relatively straightforward: you looked at the source code of a page, ensured your keywords were in the right places, and made sure your server was sending the right HTML. However, as the web has evolved from static documents into complex, interactive applications, the Document Object Model (DOM) has become the central pillar of technical SEO. Understanding how the DOM affects crawling, rendering, and indexing is no longer just for developers—it is a mandatory skill for any SEO professional working on modern websites.

The transition from “View Source” SEO to “Rendered DOM” SEO represents one of the most significant shifts in how search engines perceive the internet. Today, Google and other sophisticated crawlers do not just read your code; they execute it. They build a living representation of your site in their memory, and it is this representation—the DOM—that ultimately determines your rankings. If your DOM is messy, bloated, or hides critical information behind user interactions, your search visibility will suffer, regardless of how good your content is.

What Exactly is the Document Object Model (DOM)?

The Document Object Model (DOM) is a programming interface for web documents. It represents the page so that programs can change the document structure, style, and content. When a browser loads a webpage, it takes the raw HTML and transforms it into an object-oriented representation. This is the DOM.

Think of the HTML file sent by your server as a blueprint. While the blueprint is important, you cannot live in it. The DOM is the actual house built from that blueprint. It is a live, in-memory structure that exists within the browser. This distinction is critical because JavaScript can change the house after it is built—moving walls, adding windows, or changing the color of the paint—without ever changing the original blueprint (the HTML source code).
The DOM is organized as a hierarchical tree structure, often referred to as the “DOM Tree.” At the very top is the Document object, which acts as the root. From there, the tree branches out into Elements (HTML tags like <body>, <header>, <div>, and <p>). These elements are known as “nodes.” These nodes have relationships with one another:

Parents: An element that contains other elements (e.g., a <ul> is the parent of <li>).

Children: Elements contained within another (e.g., <li> is the child of <ul>).

Siblings: Elements that share the same parent.

This hierarchy allows search engines to understand context. For instance, a heading followed by three paragraphs tells a crawler that those paragraphs are related to that specific heading’s topic.

How to Inspect the DOM Like a Pro

Many SEO beginners make the mistake of relying solely on “View Page Source” (Ctrl+U). While viewing the source shows you what the server sent to the browser, it does not show you what the browser actually did with that information. To see the DOM, you must use the Inspect tool in your browser’s Developer Tools (F12 or Right-Click > Inspect).

The Elements panel in DevTools displays the current state of the DOM. Unlike the static source code, the Elements panel is dynamic. If a JavaScript script runs and injects a new call-to-action button or a list of related articles five seconds after the page loads, you will see it in the Elements panel, but you will never see it in the “View Source” view.

When auditing the DOM, SEOs should look for:

Dynamic Content: Content that only appears after the page has finished loading.

Modified Attributes: Changes to canonical tags, meta robots tags, or alt text driven by JavaScript.

Layout Stability: Elements that shift or change size, which can be tracked in the “Event Listeners” or “Performance” tabs within DevTools.

It is important to remember that what you see in your browser may still differ from what Googlebot sees.
Googlebot uses a specific version of the Chromium rendering engine, and it may not wait as long for scripts to execute as a human user would.

The Construction Process: How the DOM is Built

Understanding the “Critical Rendering Path” is essential for optimizing the DOM for SEO. The process of turning a string of HTML into a rendered webpage involves several distinct steps:

1. Building the DOM Tree

As the browser receives HTML data from the server, it begins the process of “Tokenization.” It breaks down the code into tokens (e.g., StartTag: html, StartTag: body). These tokens are then converted into nodes. The browser builds the tree structure by nesting these nodes based on the tags’ hierarchy.

2. The CSSOM (CSS Object Model)

While the DOM is being built, the browser also encounters <link> tags or <style> blocks. It must process these to create the CSSOM. The CSSOM is similar to the DOM but focuses on the styles applied to the elements. The browser cannot render the page until it has both the DOM and the CSSOM ready, which is why CSS is considered a “render-blocking” resource.

3. JavaScript Execution

This is where things get complicated for SEO. When the browser hits a <script> tag, it typically pauses the construction of the DOM to fetch and execute the script. Scripts have the power to “mutate” the DOM. They can add, delete, or modify nodes. This is why a page’s final DOM often looks radically different from its initial HTML. From an SEO perspective, if your content is added by a script that takes too long to run, a search engine might “give up” and index a blank or incomplete page.

4. The Render Tree

Once the DOM and CSSOM are combined, the browser creates the Render Tree. This tree only contains the elements required to render the page (it excludes hidden elements like <script> or <meta> tags, or elements with display: none). Finally, the browser performs “Layout” (calculating the geometry of each element) and “Paint” (filling in the pixels on the screen).
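The tokenization and tree-building step can be illustrated with a drastically simplified sketch built on Python’s standard-library HTML parser, which emits start-tag and end-tag tokens much like a browser’s tokenizer. This is a teaching analogy, not Chromium’s actual pipeline: real browsers also handle error recovery, implied tags, and incremental parsing.

```python
from html.parser import HTMLParser

class DomTreeBuilder(HTMLParser):
    """Toy illustration: turn start/end tag tokens into a nested node tree."""
    VOID = {"br", "img", "meta", "link", "input", "hr"}  # tags with no children

    def __init__(self):
        super().__init__()
        self.root = {"tag": "document", "children": []}  # the Document root
        self.stack = [self.root]  # current chain of open parents

    def handle_starttag(self, tag, attrs):
        node = {"tag": tag, "children": []}
        self.stack[-1]["children"].append(node)  # attach as child of current parent
        if tag not in self.VOID:
            self.stack.append(node)  # descend: this tag is now the open parent

    def handle_endtag(self, tag):
        # A closing token pops the matching parent off the open-element stack.
        if len(self.stack) > 1 and self.stack[-1]["tag"] == tag:
            self.stack.pop()

builder = DomTreeBuilder()
builder.feed("<html><body><ul><li>First</li><li>Second</li></ul></body></html>")
ul = builder.root["children"][0]["children"][0]["children"][0]
print(ul["tag"], [child["tag"] for child in ul["children"]])  # → ul ['li', 'li']
```

The nesting produced here is exactly the parent/child/sibling hierarchy described earlier: the <ul> node is the parent of its two <li> children.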
Why the DOM is the Heart of Modern SEO

In the past, Googlebot was a simple text-based crawler.
