

Multi-location SEO strategy: Stop competing with your own content

In the digital marketing landscape, multi-location brands often operate under a dangerous assumption: that more content across more pages automatically translates to higher search engine rankings. While this “carpet-bombing” approach to content might seem like a logical way to capture local markets, it frequently results in a phenomenon known as internal competition. Instead of outranking their competitors, many large-scale franchises and businesses with multiple branches find themselves inadvertently battling their own web pages for dominance in the Search Engine Results Pages (SERPs).

Investing heavily in content is a hallmark of a healthy SEO budget, but without a unified strategy, that investment can actually dilute your brand’s authority. When every individual location page or local blog covers the exact same topics with the same keywords and search intent, search engines like Google struggle to determine which page is the most relevant. The result? A fragmented digital presence where authority is spread too thin, crawl budgets are wasted, and potential customers are left confused. To win in 2026 and beyond, brands must move away from repetitive volume and toward a sophisticated, tiered content strategy that distinguishes between corporate authority and local relevance.

Where the strategy breaks down

The breakdown of a multi-location SEO strategy is rarely a deliberate choice. More often, it is a byproduct of rapid scaling or a lack of centralized marketing governance. In many organizations, there is a natural tension between the corporate marketing team and local franchisees or branch managers. Corporate teams are focused on the “big picture”—building national brand awareness and high-level domain authority. Conversely, local teams are boots-on-the-ground; they want content that addresses their specific community’s needs and keeps users on their specific sub-pages.

When these two forces act independently, the “too many cooks in the kitchen” syndrome takes over. Local branches may start their own blogs to capture local search intent, often mimicking the exact educational topics already covered on the main corporate site. Without a clear framework for who “owns” specific keywords, the website begins to cannibalize itself. Instead of having one authoritative page about “How to maintain an HVAC system” that ranks nationally and funnels users to local branches, a company might end up with 50 mediocre pages on the same topic, none of which have enough link equity or unique value to rank on page one.

What type of content belongs at corporate

The key to a successful multi-location strategy lies in the division of labor. Corporate content should act as the “North Star” for the brand, housing the comprehensive, evergreen, and educational resources that establish the organization as a leader in its industry. This centralization is essential for building domain-wide authority and ensuring that search engines view the brand as a primary source of information.

Educational and informational pillars

If a user is searching for “benefits of routine dental cleanings” or “how to choose the right homeowner’s insurance,” they are looking for information that remains consistent regardless of their geographic location. These broad, informational queries should be owned by the corporate blog. By consolidating this content into a single, high-quality URL, the brand can aggregate all its backlink power and social signals onto one page. This makes it much easier to rank for competitive, high-volume keywords than if that authority were split across dozens of local subfolders.

Core service and product descriptions

While local branches provide the service, the definition of that service usually comes from the top. Core product pages and service lines should be centralized. This ensures brand consistency and prevents the creation of near-duplicate pages that offer no unique local value. While a location page can—and should—link to these core service pages, they do not need to rewrite the entire technical specification of a product for every city in which they operate.

Brand identity and mission

Content regarding the company’s history, its leadership team, mission statements, and core values should live at the corporate level. These are the trust signals that reinforce E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness). Standardizing this information across the organization ensures that the brand’s message is never diluted or misrepresented at the local level.

What type of content belongs at the local level

If corporate owns the “What” and the “Why,” the local level must own the “Where” and the “Who.” Local content is about relevance, conversion, and community connection. This is where the brand proves it isn’t just a faceless national entity, but a local partner that understands the specific needs of its customers in a particular city or region.

Geo-specific landing pages

Every location needs a dedicated landing page that is more than just a placeholder with an address and phone number. To stand out, these pages require unique copy that reflects the local market. This includes localized metadata (Title tags and Meta descriptions that include the city name) and relevant structured data. Using LocalBusiness schema, including reviews and geo-coordinates, helps Google’s AI understand exactly where the business operates and how it relates to local search queries.

Building unique local value

To avoid being flagged as duplicate content, location pages should focus on elements that are truly unique to that branch. These include:

Local Reviews and Testimonials: Displaying reviews from customers in that specific city provides social proof that resonates with local searchers.
Team Bios and Photos: Introducing the actual staff members at a specific location builds immediate trust and differentiates the branch from a generic corporate entity.
Community Involvement: Content about local event sponsorships, charity partnerships, or awards won in that specific region adds a layer of authenticity that cannot be replicated at the corporate level.
Location-Specific Imagery: High-quality photos of the actual storefront, the local team, and the surrounding area help users and search engines confirm the location’s legitimacy.

Whether these elements live on a single robust location page or within a “microsite” structure (where each location has its own subfolder and nested pages), the goal remains the same: strengthen local relevance to drive conversions.

Common SEO risks of a
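To make the structured data recommendation in the geo-specific landing pages section concrete, here is a minimal sketch of LocalBusiness markup for a single branch page, generated with Python purely for illustration. The business name, URL, address, coordinates, and review figures are invented placeholders, not data from the article, and a real implementation should reflect each location's actual details.

```python
import json

# Assemble LocalBusiness structured data for one branch page.
# All values below are illustrative placeholders.
location = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example HVAC Co. - Springfield",
    "url": "https://www.example.com/locations/springfield/",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
        "addressCountry": "US",
    },
    "geo": {"@type": "GeoCoordinates", "latitude": 39.7817, "longitude": -89.6501},
    "aggregateRating": {"@type": "AggregateRating", "ratingValue": "4.8", "reviewCount": "132"},
}

# Emit the JSON-LD block that would sit in the location page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(location, indent=2))
print("</script>")
```

Because the corporate template stays the same while only the location-specific values change, this kind of markup can be generated centrally for every branch without duplicating page copy.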


You’re Not Scaling Content. You’re Scaling Disappointment

The Illusion of Growth in the Age of Mass Production

In the current digital landscape, the pressure to produce content at an industrial scale has never been higher. Marketing departments and SEO agencies often find themselves locked in a relentless arms race, fueled by the belief that a higher volume of pages inevitably leads to a larger share of the market. This philosophy, often referred to as the “volume playbook,” suggests that if you can dominate a keyword set by sheer mass, you can force your way into search engine dominance. However, as industry veterans like Pedro Dias have pointed out, this strategy is frequently a house of cards.

The reality is that many organizations are not actually scaling their influence, their brand, or their revenue. Instead, they are scaling disappointment. They are investing thousands of hours and significant capital into a content engine that produces diminishing returns, creates technical debt, and ultimately alienates the very audience it was intended to capture. To understand why the “publish more pages” strategy so often results in failure, we must examine the fundamental disconnect between search engine algorithms and the industrialization of content creation.

The Recurring Cycle of the Volume Playbook

The history of search engine optimization is littered with the remains of content strategies that prioritized quantity over quality. From the early days of keyword stuffing and link farms to the mid-2010s era of content “mills,” the cycle remains remarkably consistent. It begins with a loophole or an observation that certain types of thin content are ranking well. This leads to a frantic rush to replicate that success at scale.

Initially, the results may look promising. A surge in indexed pages often leads to a temporary spike in impressions and clicks. Stakeholders celebrate the “hockey stick” growth on their analytics dashboards. However, this success is almost always short-lived. Google and other search engines are designed to provide the best possible answer to a user’s query. When a site begins to flood the index with low-value, repetitive, or derivative content, it triggers a series of algorithmic checks designed to maintain the integrity of the search results. Eventually, an update occurs—be it a core update or a specific helpful content adjustment—and the site’s traffic collapses. The disappointment sets in, followed by a period of panic, a pivot to a “new” strategy that is often just a variation of the old one, and the cycle begins anew. This cycle persists because it is easier to measure “number of articles published” than it is to measure “true audience value.”

The AI Catalyst: Accelerating the Race to the Bottom

The advent of generative AI has acted as an accelerant for this cycle of disappointment. Tools that can generate thousands of words in seconds have lowered the barrier to entry for content production to near zero. While AI is a powerful tool for research and structural assistance, its misuse has led to a “gray goo” of content—vast expanses of text that are grammatically correct but fundamentally empty of new insights, unique perspectives, or genuine expertise.

When organizations use AI to scale content without human oversight or editorial standards, they are effectively automating their own irrelevance. Search engines have become increasingly sophisticated at identifying “LLM-style” writing that lacks the E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) required for high rankings. By using AI to simply rephrase existing information found on the web, brands are contributing to a recursive loop of content that offers no incremental value to the user. This is not scaling content; it is scaling noise.

The Danger of Information Dilution

One of the most significant risks of mass-producing SEO content is the dilution of a website’s overall authority. Every page on a website carries a certain weight in the eyes of a search engine. When a site is bloated with thousands of thin, low-performing pages, it creates “index bloat.” This forces search engine crawlers to waste their “crawl budget” on low-quality pages rather than discovering and indexing the truly valuable insights hidden within the site.

Furthermore, internal link structures become muddled. When you have twenty different articles targeting slightly different variations of the same keyword, you are effectively competing against yourself. This internal cannibalization confuses search engines and makes it difficult for them to determine which page is the definitive authority on a topic. Instead of having one powerhouse page that ranks in the top three results, you end up with twenty pages languishing on page five of the search results.

Understanding the Difference Between Scale and Growth

True scaling in content marketing involves increasing the impact of your message without a linear increase in resources or a decrease in quality. Growth, on the other hand, should be measured by the depth of engagement and the conversion of readers into loyal advocates. The “publish more” playbook confuses activity with progress. Consider the following distinctions between scaling disappointment and scaling value:

Scaling Disappointment: Focuses on output metrics (number of posts, word counts, keyword density). Scaling Value: Focuses on outcome metrics (time on page, return visitor rate, assisted conversions, brand sentiment).
Scaling Disappointment: Rehashes existing top-ranking content to “match” what is already there. Scaling Value: Introduces original research, case studies, and contrarian viewpoints that add to the conversation.
Scaling Disappointment: Relies on automated templates and generic AI prompts. Scaling Value: Leverages Subject Matter Experts (SMEs) to provide depth that AI cannot replicate.

The Psychological Trap of the “More is Better” Mindset

Why do experienced marketers continue to fall for the volume trap? Much of it is rooted in corporate psychology. In many organizations, SEO is treated as a commodity rather than a strategic asset. Executives often want to see tangible evidence of work, and a spreadsheet showing 500 new URLs is a more “tangible” deliverable than a report explaining why three high-quality white papers took three months to produce. This creates a misaligned incentive structure. Agencies are incentivized to bill for “deliverables,” and internal teams are incentivized to meet “content quotas.” Neither of these incentives is tied to
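The internal cannibalization described above usually shows up in query-level analytics long before a core update makes it painful. Below is a minimal sketch of how one might surface it, assuming a CSV export with query, page, and clicks columns; the file name and column names are assumptions for illustration, not a specific tool's schema.

```python
import csv
from collections import defaultdict

# Group landing pages by query and flag queries where several URLs from the
# same site are splitting clicks between them.
pages_by_query = defaultdict(lambda: defaultdict(int))

with open("query_page_report.csv", newline="") as f:   # assumed export file
    for row in csv.DictReader(f):                       # assumed columns: query, page, clicks
        pages_by_query[row["query"]][row["page"]] += int(row["clicks"])

for query, pages in sorted(pages_by_query.items()):
    if len(pages) > 1:  # more than one URL earning clicks for the same query
        total = sum(pages.values())
        print(f"Possible cannibalization on '{query}' ({total} clicks across {len(pages)} URLs):")
        for url, clicks in sorted(pages.items(), key=lambda kv: -kv[1]):
            print(f"  {clicks:>6}  {url}")
```

Queries flagged this way are candidates for consolidation into a single authoritative page rather than yet another new article.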


Your SEO maturity score doesn’t measure what you think it does

Understanding the True Nature of SEO Maturity

In the high-stakes world of digital marketing, we often fixate on metrics that provide immediate gratification. We track keyword rankings, organic traffic growth, and backlink profiles with religious fervor. However, when an organization decides to measure its “SEO maturity,” there is a common and dangerous misconception about what that score actually represents. Many stakeholders believe a high maturity score is a reflection of technical perfection or content volume. In reality, your SEO maturity score doesn’t measure what you think it does.

Most SEO programs operate in a state of precarious success. They rely on the brilliance of a few individuals rather than the strength of the organization’s infrastructure. The Visibility Governance Maturity Model (VGMM) was designed to address this specific gap. It isn’t an audit of your H1 tags or your site speed; it is an assessment of clear ownership, documented processes, and the decision rights that prevent your hard work from being accidentally dismantled by other departments. If your SEO strategy relies on a “hero” to save the day whenever an algorithm update hits, your organization isn’t mature—it’s lucky. True maturity is about sustainability, and the VGMM is the diagnostic tool that reveals whether your success is built on a foundation of granite or a house of cards.

What VGMM Questions Are Designed to Reveal

To understand the score, you must first understand the questions. A VGMM assessment doesn’t ask practitioners if they know how to optimize a page. Instead, these questions are directed at managers and the C-suite—the individuals who are responsible for the resources and governance of the brand’s digital presence. This is a critical distinction. The SEO practitioner usually knows exactly what needs to be done. They understand the nuances of schema markup, the importance of internal linking, and the complexities of crawl budget. But the VGMM isn’t testing individual knowledge; it is testing institutional knowledge. It diagnoses organizations where SEO expertise lives exclusively in the heads of employees rather than in documented, governed processes. If an organization’s SEO strategy walks out the door when a senior manager takes a new job, that organization has a maturity problem.

Governance gaps typically manifest in the responses of management. When senior leaders are asked about the SEO process, the warning signs are often phrases like:

“I don’t actually know the answer to that.”
“You’d have to ask Sarah; she handles all the technical stuff.”
“We had a process for that last year, but I’m not sure if anyone is still following it.”
“Every regional team handles their own optimization differently.”
“I think that documentation exists somewhere in the shared drive, but I haven’t seen it.”

When leadership cannot answer basic questions about governance, it is a clear signal that SEO processes are not institutionalized. The organization is operating in a reactive state, vulnerable to personnel changes and departmental silos.

The SPOF Reality Check: Why You Might Be a Liability

One of the most sobering aspects of the VGMM is the identification of a Single Point of Failure (SPOF). In many organizations, the most talented SEO practitioner is also the company’s greatest risk. If you are the person who knows where all the “bodies are buried”—the one who understands the weird redirects from 2018, the logic behind the canonical tags, and exactly what will break if the dev team pushes a specific update—you are a SPOF.

While this might feel like ultimate job security, it is actually what governance experts call a “job prison.” You cannot take a vacation without checking your email. You cannot be promoted without leaving a vacuum that could collapse the department. More importantly, from a maturity standpoint, a SPOF acts as a hard ceiling. An organization cannot move past Level 2 maturity as long as a Single Point of Failure exists.

When the VGMM identifies you as a SPOF, it changes the conversation with leadership. Instead of you begging for more help, the data shows leadership that the current setup is a business risk. This realization leads to several positive outcomes:

Resource Allocation: Leadership realizes that your knowledge must be codified into documentation.
Training Budgets: Approval is granted to train others, spreading the expertise across the team.
Institutional Continuity: Your expertise becomes a part of the company’s intellectual property, not just a personal skill set.
Better Work-Life Balance: You can finally step away from the office knowing that the systems you built are governed by process, not just your presence.

How Domain Scores Become a VGMM Score

The VGMM is not a single, monolithic test. It is composed of various domain models, such as the SEO Governance Maturity Model (SEOGMM), Content Governance Maturity Model (CGMM), and Website Performance Maturity Model (WPMM). Each of these contributes to a holistic view of the company’s digital health. The process of arriving at a final score involves five distinct steps.

Step 1: Domain Assessment

Each domain utilizes a bank of 30 to 60 governance questions. These are strictly behavior-based. An opinion-based question might ask, “Do you think SEO is important for our growth?” (To which everyone says yes). A behavior-based question asks, “Are the SEO standards for new product launches documented and signed off by the Product Lead?” (A question that requires proof of a process).

Step 2: Weighted Scoring

Not all governance failures carry the same weight. A minor documentation gap in a low-traffic section of the site is weighted differently than a lack of ownership over critical technical decisions. The system identifies which gaps have the highest potential for catastrophic failure and weighs the score accordingly.

Step 3: The SPOF Constraint

This is the “fail-safe” of the maturity model. If a Single Point of Failure is detected, the domain score is automatically capped at Level 2 (Emerging). It does not matter how sophisticated your tools are or how high your traffic is; if the system relies on one person, it is not “Structured” (Level 3).

Step 4: Domain Aggregation

Individual domain scores are then averaged into an overall
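Putting the steps above together, here is a minimal sketch of how a weighted, SPOF-capped domain score and a simple aggregate could be computed. The weights, the level thresholds, and the per-domain answers are invented for illustration; they are not the published VGMM scoring tables.

```python
def domain_score(answers, has_spof):
    """answers: list of (passed, weight) pairs for behavior-based governance questions."""
    total_weight = sum(weight for _, weight in answers)
    earned = sum(weight for passed, weight in answers if passed)
    ratio = earned / total_weight if total_weight else 0.0

    # Map the weighted pass ratio onto a 1-5 maturity level (assumed thresholds).
    thresholds = [(0.85, 5), (0.65, 4), (0.45, 3), (0.25, 2)]
    level = next((lvl for cutoff, lvl in thresholds if ratio >= cutoff), 1)

    # Step 3: a detected Single Point of Failure caps the domain at Level 2.
    return min(level, 2) if has_spof else level


# Step 4: individual domain scores are aggregated (illustrative answers only).
domains = {
    "SEOGMM": domain_score([(True, 3), (True, 2), (False, 1)], has_spof=True),
    "CGMM": domain_score([(True, 2), (True, 2), (True, 1)], has_spof=False),
    "WPMM": domain_score([(True, 1), (False, 2), (False, 3)], has_spof=False),
}
overall = sum(domains.values()) / len(domains)
print(domains, f"overall={overall:.1f}")
```

Note how the first domain would have landed at Level 4 on its weighted answers alone, but the SPOF constraint pulls it down to Level 2, which is exactly the ceiling the model describes.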


AI Mode is Google’s next ads engine — and it already knows how to monetize it

The landscape of digital advertising is undergoing its most significant transformation since the invention of the search engine itself. As conversational search gains rapid traction among consumers, the industry’s focus is shifting away from simple user acquisition toward the much more complex challenge of monetization. While many newcomers have entered the space, Google is positioning itself to dominate the next era of search through a sophisticated integration of artificial intelligence and its existing, world-class advertising infrastructure.

Google’s “AI Mode” represents more than just a new feature; it is a fundamental evolution of the company’s core business model. For decades, Google has refined the art of matching user intent with commercial offerings. As we move into an era dominated by Large Language Models (LLMs) and conversational interfaces, Google enters the fray with a massive advantage: a mature ad ecosystem, deep advertiser adoption, and decades of data optimization that its competitors are only beginning to replicate. Early signals from the AI Mode rollout suggest a measured, strategic approach designed to preserve Google’s revenue streams while redefining how users interact with brands.

The End of the Panic Phase: Google Regains Its Footing

At the start of 2025, the consensus among tech analysts was that Google was in a state of crisis. The meteoric rise of ChatGPT and other LLM-based search tools led many to believe that the traditional search engine was a “dinosaur” waiting for extinction. During this period, Google reportedly issued an internal “Code Red,” signaling an existential threat to its dominance. This panic was reflected in the markets, where Google’s parent company, Alphabet, saw its shares tumble nearly 30% from their peak.

However, the narrative has shifted dramatically. Through massive capital expenditures and the rapid deployment of its Gemini models, Google has successfully regained ground. By December 2025, the tables had turned; the “Code Red” was no longer a Google problem, but an OpenAI problem as the latter struggled to find a sustainable path to profitability. Today, Google’s valuation sits at approximately $3.6 trillion, trailing only Apple in market capitalization. This recovery was bolstered by a rally of over 130% from its lows, signaling that investors have regained confidence in Google’s ability to navigate the AI revolution.

Perhaps the most significant validation of Google’s AI strategy came from Apple. In a move that shocked many who expected a Siri-OpenAI exclusive partnership, Apple chose Google to power key elements of its own AI ecosystem. This partnership underscores the reality that Google’s infrastructure and data reliability remain the gold standard in the tech industry, even in an AI-first world.

Why Monetization Will Decide the Winner

In the tech world, popularity is a vanity metric; monetization is the reality. The reason Google’s progress in LLM conversational queries—manifested through AI Overviews and AI Mode—has had such a profound impact on its valuation is simple: financial visibility. Investors and stakeholders needed to know if the shift from “ten blue links” to conversational answers would erode Google’s margins or strengthen them. For Google’s leadership and its CFO, the goal was to determine if changes in user behavior would weaken the business model. The early results suggest the opposite. Google before the shift was a titan; Google after the shift remains one, likely with even deeper integration into the consumer’s decision-making process.

This visibility extends to advertisers as well. A significant portion of global digital advertising spend is allocated to Google Ads. While advertisers are exploring platforms like ChatGPT, Claude, and Perplexity, very few are willing to abandon the proven ROI of the Google ecosystem. As one industry veteran noted, no advertiser has ever said they are comfortable losing 30% of their Google-driven business to experiment with unproven, fragmented alternatives. Google’s strength lies in the fact that it doesn’t just provide a search result; it provides a comprehensive path to purchase that advertisers trust and understand.

How Monetization Will Play Out in AI Search

The competition between Google’s AI Mode and ChatGPT is not just a race for users; it is a battle of business models. There are several moving parts that will dictate who wins the monetization war:

1. Ad Formats and Native Integration

The challenge of AI search is placing ads in a way that feels helpful rather than intrusive. Google has decades of experience with “native” advertising—making ads look and feel like part of the search experience. OpenAI is currently testing an auction model, but it remains limited to a small group of large advertisers and has been criticized for being “aggressive” in its early implementations.

2. The Pace of Rollout

Google is taking a measured approach, slowly introducing AI Overviews into standard search results before pushing users toward the full AI Mode. This allows the company to gather data on user sentiment and ad performance without alienating its massive user base. In contrast, newer platforms are often forced to move faster to satisfy venture capital demands, which can lead to mistakes in user experience.

3. Advertiser Adoption and the Agency Ecosystem

Google has an army of certified partners and agencies that know how to pull the levers of Google Ads. For a new platform to succeed, it must not only build a tool for users but also a dashboard for advertisers that provides the same level of granular control and reporting that Google offers. Currently, OpenAI is outsourcing some of its inventory to programmatic partners like Criteo and The Trade Desk—a pragmatic step, but one that highlights how far they are from having a self-contained, scalable ads business.

4. Data and Full-Funnel Journeys

Google’s greatest advantage is its data across the entire funnel. Between Chrome, Maps, YouTube, and Gmail, Google knows more about the user journey than any other entity. This allows AI Mode to serve ads that are not just based on the current conversation, but on a holistic understanding of the user’s needs and past behaviors.

AI Mode: Strategic Considerations for Advertisers

For digital marketers, the transition to AI Mode should be viewed with curiosity rather than fear. While the interface


Google Shares More Information On Googlebot Crawl Limits via @sejournal, @martinibuster

Understanding the Mechanics of Googlebot Crawl Limits

For search engine optimization professionals and webmasters, the way Google interacts with a website is a primary focus of technical SEO. At the heart of this interaction is Googlebot, the sophisticated web-crawling software that discovers, analyzes, and indexes the vast expanse of the internet. Recently, Google has shared more detailed information regarding Googlebot’s crawl limits, emphasizing that these limits are not static. Instead, they are highly flexible, moving up or down based on the specific needs of the website and the capabilities of the hosting environment.

The concept of a “crawl limit” is often bundled into the broader topic of “crawl budget.” While many smaller websites rarely have to worry about running out of crawl budget, for enterprise-level sites, large e-commerce platforms, and massive news publishers, understanding how Googlebot decides when to speed up or slow down is essential. The latest insights from Google clarify that the search engine aims for a balance: it wants to discover as much high-quality content as possible without overwhelming the server that hosts the site.

What Is Googlebot and Why Does It Have Limits?

Googlebot is the generic name for Google’s two types of crawlers: Googlebot Desktop and Googlebot Smartphone. Its primary mission is to traverse the web by following links, reading sitemaps, and identifying new or updated content to add to Google’s index. However, crawling is a resource-intensive process. Every time Googlebot visits a page, it consumes server bandwidth and processing power. If Googlebot were to crawl too aggressively, it could potentially slow down the site for human users or even cause a server crash.

To prevent this, Google implements crawl limits. These are safety mechanisms designed to protect the “health” of a website’s server. The crawl limit is essentially the maximum number of simultaneous connections Googlebot can make to a site, as well as the delay between those connections. The recent revelation from Google engineers underscores that these limits are dynamic. They are not a “set it and forget it” metric but a fluctuating value that responds to real-time data.

The Two Pillars of Crawl Management: Limit vs. Demand

To understand the flexibility Google mentions, one must distinguish between two key concepts: the Crawl Rate Limit and Crawl Demand.

1. Crawl Rate Limit

The Crawl Rate Limit is designed to ensure Googlebot doesn’t degrade the user experience on your site. If your server is fast and responds quickly to requests, the crawl rate limit generally increases. This means Googlebot can crawl more pages simultaneously. Conversely, if the server begins to slow down or returns error messages (like the 503 Service Unavailable status), Googlebot will automatically reduce its crawl rate limit to give the server room to recover.

2. Crawl Demand

Even if a site has a high crawl rate limit because its server is incredibly fast, Googlebot might not crawl it frequently if there is no “demand” for the content. Crawl demand is driven by how popular the pages are and how often they are updated. If a site hasn’t changed in months and doesn’t receive many external signals of importance (like links or search traffic), Googlebot will lower its crawl demand. It doesn’t want to waste resources re-indexing content that hasn’t changed.

The flexibility Google refers to involves the interplay between these two pillars. Googlebot is constantly recalculating the optimal point where it can satisfy its demand for content without exceeding the limit of what the server can handle.

Factors That Influence Crawl Limit Flexibility

Google has clarified that several technical factors directly influence whether your crawl limit will be increased or decreased. Understanding these factors allows SEOs to optimize their infrastructure for better visibility.

Server Response Speed

The most immediate factor is the Time to First Byte (TTFB) and the overall latency of the server. When Googlebot makes a request, it measures how long it takes for the server to respond. If the response is near-instant, Googlebot perceives the server as “healthy” and “capable.” This signals that the crawl limit can safely be increased. In contrast, high latency is a primary trigger for Googlebot to scale back its activity.

Status Codes and Server Errors

Googlebot pays close attention to HTTP status codes. If a site starts returning 5xx series errors (server-side errors), Googlebot interprets this as a sign that the server is struggling under the current load. In response, it will immediately decrease the crawl limit. Interestingly, even 429 (Too Many Requests) status codes are taken as a direct signal to slow down. Google’s flexibility means that once these errors subside and the server stabilizes, Googlebot will gradually begin to increase the crawl limit again, though this recovery isn’t always instantaneous.

Site Quality and Update Frequency

While the crawl limit is largely a technical constraint, the overall quality of the site influences the “flexible” nature of how Google allocates its resources. Sites that consistently produce high-quality, original content that users find valuable will naturally see higher crawl demand. Googlebot is “hungry” for this type of content and will push the crawl limit to its safe maximum to ensure the new information is indexed quickly.

The Role of Google Search Console in Monitoring Crawl Limits

Google provides a vital tool for webmasters to see exactly how these flexible crawl limits are being applied: the Crawl Stats Report in Google Search Console. This report offers a transparent look at how Googlebot sees your site’s infrastructure. Within the Crawl Stats report, users can see a breakdown of requests by response code, file type, and purpose (discovery vs. refresh). Most importantly, it provides a “Host Status” section. This section highlights whether Google encountered any issues with robots.txt fetching, DNS resolution, or server connectivity.

If any of these metrics show a downward trend, it is a clear indicator that Googlebot has likely decreased your crawl limit. By resolving these technical bottlenecks, webmasters can encourage Googlebot to increase the limit back to its previous levels.

The Crawl Rate Settings Tool

Google still maintains a legacy tool within Search
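One practical way to watch the signals discussed above outside of Search Console is to summarize how your own server is answering Googlebot in its access logs. The sketch below assumes a combined-format access log at access.log; filtering on the user-agent string alone is a rough heuristic (a stricter check would also verify that the requests really come from Google's published IP ranges).

```python
import re
from collections import Counter

# Combined log format: ip ident user [time] "request" status bytes "referrer" "user-agent"
line_re = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "[^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

statuses = Counter()
with open("access.log") as log:  # assumed log location
    for line in log:
        match = line_re.match(line)
        if match and "Googlebot" in match.group("ua"):
            statuses[match.group("status")] += 1

total = sum(statuses.values())
slow_down = sum(count for code, count in statuses.items() if code == "429" or code.startswith("5"))

print(f"Googlebot requests seen: {total}")
for code, count in statuses.most_common():
    print(f"  {code}: {count}")
if total:
    print(f"429/5xx share: {slow_down / total:.1%} (a rising share is the kind of signal that lowers the crawl limit)")
```

Tracking that 429/5xx share over time gives an early warning that Googlebot is about to scale back, often before the Crawl Stats report makes the drop obvious.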


Google AI Overviews now appear on 14% of shopping queries: Report

The Rapid Rise of AI in the Retail Search Experience

The landscape of search engine results pages (SERPs) is undergoing its most significant transformation in over a decade. For years, ecommerce brands relied on a predictable hierarchy of paid search ads, Google Shopping carousels, and organic listings. However, the integration of generative artificial intelligence into Google’s core search functionality—now known as AI Overviews (AIOs)—is fundamentally altering how consumers discover and evaluate products.

A new comprehensive analysis from Visibility Labs reveals that AI Overviews now appear on 14% of all shopping-related queries. While that percentage might seem modest at first glance, the trajectory of this growth is staggering. In November 2025, AI Overviews were present on only 2.1% of these queries. In a matter of just four months, the prevalence of AI-generated summaries in the shopping sector has increased by 5.6x. This shift indicates that Google is no longer merely “testing” AI in search; it is aggressively deploying it in the most high-value, high-intent sectors of the web. For ecommerce brands and digital marketers, the message is clear: the traditional SEO playbook is being rewritten in real-time.

Understanding the Visibility Labs Data: A Deep Dive into 20 Million Queries

To understand the scale of this shift, it is essential to look at the methodology behind the Visibility Labs report. The study was not based on a handful of niche terms but rather a massive dataset of 20,900,323 shopping keywords. The researchers specifically targeted “product-intent” keywords—those that typically trigger a Shopping box, whether paid or organic. These are the queries that sit at the bottom of the marketing funnel, where users are ready to make a purchase. Examples of terms analyzed included “weighted blanket,” “mushroom coffee,” “protein powder,” and “blue T-shirts.”

Out of the nearly 21 million keywords analyzed, 2,919,229 triggered an AI Overview. This 14.0% penetration rate represents a critical tipping point. When more than one in ten high-intent searches are mediated by an AI summary, the impact on organic traffic patterns becomes impossible to ignore.

Why the 5.6x Growth in Four Months Matters

The speed of adoption is perhaps the most alarming metric for ecommerce retailers. A 560% increase in four months suggests that Google’s confidence in its AI models for retail is growing exponentially. In the early stages of AI Overviews, Google appeared cautious, primarily showing AI summaries for informative or “long-tail” queries where a direct answer was easy to provide. Shopping queries are more complex; they involve real-time pricing, stock levels, consumer reviews, and competitive comparisons. The fact that AIOs are now appearing on 14% of these queries suggests that Google’s AI is becoming better at synthesizing volatile commercial data into coherent recommendations.

For brands, this rapid expansion creates a “visibility gap.” If a brand spent the last two years optimizing for the top organic spot for “protein powder,” but that spot is now pushed three scrolls down by an AI Overview, an ad block, and a Shopping carousel, their organic click-through rate (CTR) is likely to plummet despite their high ranking.

The “Search Battlefield”: How AI Overviews Reorder the SERP

The introduction of AI Overviews effectively creates a new layer at the very top of the search results. In a typical shopping query today, the user experience often follows this order:

1. Sponsored Search Ads: Paid placements that remain a primary revenue driver for Google.
2. AI Overviews: A generative summary that explains the product category, lists key features to look for, and often recommends specific products with links.
3. Google Shopping / Popular Products: A visual grid of product listings with prices and ratings.
4. Traditional Organic Listings: The standard “blue links” that have historically been the focus of SEO.

By the time a user reaches the organic listings, they have already been presented with an AI-curated list of options and a set of paid ads. This structure poses a significant threat to “middle-man” review sites and ecommerce blogs that rely on organic traffic to drive affiliate sales or direct conversions.

Which Shopping Queries are Most Affected?

The Visibility Labs report highlights that AI Overviews are not distributed evenly across all product types. They are most prevalent in categories where consumers often require guidance or comparison. Keywords like “mushroom coffee” or “weighted blanket” are perfect candidates for AIOs because they represent “consideration-phase” products. A user searching for “mushroom coffee” might want to know about the benefits, the different types of fungi used (Reishi vs. Lion’s Mane), and which brands are the most reputable. The AI Overview can synthesize all of that information into a single box, potentially satisfying the user’s intent without them ever needing to click through to a specialized health blog or an individual brand’s education page.

Conversely, very specific, branded “navigational” queries (e.g., “Nike Air Max 90 size 10”) are less likely to trigger a broad AI summary because the intent is already highly specific. The growth in AIOs is currently concentrated in the “discovery” phase of shopping, where the AI acts as a digital concierge.

The Threat of AI-Driven Click Loss

For years, the ecommerce industry has watched as Google moved toward a “zero-click” environment. Features like featured snippets and knowledge panels began the trend of answering queries directly on the search page. AI Overviews take this to an entirely new level. In a shopping context, click loss occurs when the AI Overview provides enough information for the user to make a decision—or leads them directly into a Google-owned checkout experience—bypassing the brand’s own site.

If the AI Overview lists the “top 5 protein powders for muscle gain” and includes links directly to the product pages or a Google Shopping checkout, the informational “Top 10” articles that used to rank for those terms lose their purpose. Ecommerce brands that have relied heavily on “top-of-funnel” content—such as buying guides and comparison articles—are the most vulnerable. If Google’s AI can generate a buying guide on the fly, the need for a third-party guide diminishes.

The Shift Toward “AI SEO”: A New Necessity

Jeff Oxford,


Small publisher search traffic fell 60% over two years: Data

The Shifting Landscape of Digital Publishing

For more than two decades, the playbook for digital growth was clear: optimize for search engines, capture long-tail traffic, and scale through organic discovery. This model allowed small, independent publishers to compete with media giants by out-maneuvering them in the niches. However, new data suggests that this era of the open web is undergoing a seismic shift. Small publishers—those who formed the backbone of the specialized internet—are now facing a crisis of visibility.

According to recent data from Chartbeat, an analytics platform used by thousands of global media sites, search referral traffic for small publishers has plummeted by a staggering 60% over the last two years. This decline highlights a growing divide in the digital ecosystem, where the “rich get richer” and smaller players are increasingly squeezed out of the search results pages they once dominated.

Breaking Down the Data: Who is Losing the Most?

The Chartbeat study categorized publishers by their daily pageview volume to determine how different tiers of the industry are weathering the changes in search algorithms and user behavior. The results reveal a direct correlation between site size and traffic stability. The smaller the site, the more devastating the loss.

Publishers categorized as “small”—those generating between 1,000 and 10,000 daily pageviews—saw the most dramatic decline, with search referrals dropping 60% over a 24-month period. These are often the “passion projects,” niche hobbyist sites, and local news outlets that rely heavily on organic discovery to find new readers.

Mid-sized sites, which Chartbeat defines as having 10,000 to 100,000 daily pageviews, did not fare much better. These publishers saw a 47% drop in search referral traffic. While they may have more established brand recognition than the smallest sites, they are clearly struggling to maintain their positions as Google and other search engines prioritize different types of content and interfaces.

Large publishers, defined as those with more than 100,000 daily pageviews, have proven to be the most resilient, though they are by no means immune. This group saw search traffic decline by 22%. While still a significant loss, it is roughly one-third of the damage sustained by smaller competitors. This suggests that large-scale media brands possess a “moat” of authority and resources that protects them from the worst of the volatility.

The Google Factor: Search and Discover in Decline

The primary driver behind these numbers is the changing nature of Google. The data shows that Google Search pageviews specifically fell 34% year-over-year. This isn’t just a matter of people searching less; it’s a matter of how Google is presenting information. With the integration of AI Overviews and a heavy emphasis on “zero-click” searches, users are often getting the answers they need directly on the search results page without ever clicking through to a publisher’s website.

Google Discover, once hailed as a “traffic firehose” for publishers, has also become increasingly unreliable. The data indicates a 15% drop in traffic from Discover. For many small and mid-sized publishers, Discover was a vital source of viral traffic that could make up for slow search days. Its decline suggests that Google’s recommendation algorithms are becoming more selective or are prioritizing video and social-first content over traditional web articles.

The AI Mirage: Is ChatGPT Replacing Search?

Much has been made of the rise of generative AI as the next frontier of information retrieval. The Chartbeat data does show a massive surge in referrals from AI platforms like ChatGPT, which rose by 200%. At first glance, this looks like a burgeoning new traffic source for publishers to exploit. However, the “reality check” is sobering. Despite a 200% growth rate, traffic from ChatGPT and similar AI chatbots still accounts for less than 1% of total publisher traffic. While AI may be changing how users interact with information, it is currently failing to serve as a meaningful replacement for the massive volume of referral traffic lost from traditional search engines.

The problem is structural: search engines were designed to be directories that point users elsewhere, whereas AI chatbots are designed to be destinations that synthesize information. When a chatbot provides a comprehensive answer, the user has very little incentive to click a citation link to visit the original source.

Where is the Traffic Going?

While the search referral numbers are alarming, there is a silver lining in the broader data. Total weekly pageviews across all publishers fell by only 6% from 2024 to 2025. This is a relatively minor dip that can often be attributed to fluctuations in the news cycle or seasonal trends. If search traffic is down significantly but total pageviews are relatively stable, it means the audience hasn’t disappeared—they have simply changed how they find content.

The data points to a shift toward direct, internal, and messaging channels. People are increasingly visiting their favorite sites directly, following links in newsletters, or sharing content within private messaging apps like WhatsApp and Slack. This shift represents a transition from a “discovery-based” internet to an “intent-based” or “relationship-based” internet. Users are bypassing the middleman (Google) and going straight to the sources they trust. While this is a positive development for brand loyalty, it makes it incredibly difficult for new or small publishers to break through and grow a new audience.

Why Small Publishers are at the Highest Risk

Small publishers are uniquely vulnerable in this new environment for several reasons. First, they often lack the “brand moat” that large publishers enjoy. If a user wants to know the latest news on a major political event, they might type “nytimes.com” directly into their browser. A small niche site rarely enjoys that level of direct intent; they rely on someone searching for a specific topic and happening upon their article.

Second, small publishers are often the most impacted by Google’s recent “Helpful Content” and “Core” updates. These updates have increasingly favored sites with high “E-E-A-T” (Experience, Expertise, Authoritativeness, and Trustworthiness). In many cases, Google’s algorithms equate authority with site size and historical longevity, making it difficult for smaller, specialized sites to outrank


Google retires several legacy ad format policies

Understanding the Shift in Google Ads: The End of Legacy Policy Frameworks

The digital advertising landscape is in a constant state of flux, driven by rapid advancements in machine learning and a shift toward simplified, automated campaign management. In a significant move to streamline its ecosystem, Google has officially announced the retirement of several legacy ad format policies. Effective March 17th, these changes mark a definitive step toward a more unified and modernized advertising platform.

For years, Google Ads has maintained a complex web of requirements tailored to specific, often manual, ad formats. As the platform has evolved from static text strings and simple banners into dynamic, AI-powered experiences like Performance Max and Responsive Search Ads (RSAs), many of these older rules have become redundant or conflicting. By removing these outdated frameworks, Google aims to reduce the “policy friction” that advertisers often face when launching new campaigns.

This update isn’t just about deleting old text; it represents a fundamental change in how Google views ad quality and compliance. In the modern era of Google Ads, the focus has shifted away from rigid, format-specific constraints and toward broader, more holistic standards that apply across the entire Google network. This allows the platform’s AI to function more effectively, optimizing creative assets without being hindered by rules designed for the web of a decade ago.

What Specific Policies Are Being Discontinued?

The retirement of these policies targets four primary areas of the Google Ads experience. While these formats still exist in some capacity—often as parts of larger, automated systems—the specific legacy policy structures governing them are being phased out. These areas include:

1. Lead Form Ads and Extensions

Lead form ads have evolved significantly since their inception. Originally managed under a specific set of niche requirements, lead generation is now a core component of both Search and Discovery (now Demand Gen) campaigns. The legacy policies once used to govern the granular mechanics of these forms are being retired in favor of more streamlined data privacy and user experience standards that apply to all lead-generation tools within the Google suite.

2. Image Quality Standards

In the early days of the Google Display Network, image quality was governed by very specific, manual rules regarding resolution, borders, and text overlays. With the rise of Responsive Display Ads (RDAs) and Performance Max, Google’s AI now automatically crops, scales, and optimizes images to fit various placements across the web and mobile apps. The legacy “Image Quality” policy was built for a world of static banners; today’s world requires a more flexible approach where the AI handles the heavy lifting of visual presentation.

3. Responsive Ads Frameworks

Responsive ads were once the “new” kid on the block, requiring their own unique set of policies to manage how multiple headlines and descriptions were mixed. Now that Responsive Search Ads (RSAs) have become the default standard for Search, and Expanded Text Ads (ETAs) have been sunset, the “legacy” responsive policies are no longer necessary. They have been absorbed into the standard Google Ads policy framework, reflecting the fact that “responsive” is no longer a special category—it is simply the way the platform operates.

4. Legacy Text Ad Requirements

Perhaps the most significant change for long-time advertisers is the final removal of legacy policies tied to old-school text ads. For years, Google maintained rules specifically for standard text ads and Expanded Text Ads. Since advertisers can no longer create or edit these formats in most contexts, maintaining separate policy documents for them was creating unnecessary clutter for modern marketers. This cleanup ensures that when an advertiser looks for text requirements, they are only seeing rules relevant to the current AI-driven formats.

The Driver Behind the Change: Automation and AI

The primary catalyst for retiring these legacy policies is Google’s aggressive push toward automation. In a traditional manual bidding and ad creation environment, rigid policies were necessary to ensure a consistent user experience. However, in an AI-driven environment, the system needs more “room to breathe” to find the best combination of assets for each individual user.

When Google’s algorithms determine which ad to show a user on YouTube, Gmail, or a Search results page, they consider millions of signals in real-time. Legacy policies that dictated exact image aspect ratios or specific text placements were often at odds with the AI’s ability to optimize. By simplifying the policy landscape, Google is essentially “clearing the tracks” for its machine learning models to operate with fewer artificial constraints.

Furthermore, this update reflects the consolidation of Google’s various ad products. In the past, Search, Display, and YouTube each had their own distinct policy “manuals.” Today, with the rise of cross-channel products like Performance Max, those silos are disappearing. A unified policy framework is a prerequisite for a unified campaign type.

What This Means for Advertisers and Agencies

For most advertisers, this change will be a welcome simplification. Navigating the labyrinth of Google Ads policies has long been a pain point for digital marketing agencies and small business owners alike. By removing legacy baggage, Google is making it easier to understand what is currently required for a campaign to be approved and successful. However, “retired” does not mean “anything goes.” Advertisers must still adhere to the primary Google Ads Policy framework, which covers prohibited content, restricted businesses, and editorial standards. The focus should now shift toward:

Adopting Current Best Practices

Instead of worrying about the specific legacy rules for a text ad, marketers should focus on the quality of their assets within Responsive Search Ads. This means providing the maximum number of headlines and descriptions and ensuring that each asset is distinct and compelling. The system’s “Ad Strength” meter is now a more valuable guide than the legacy policy documents of the past.

Prioritizing High-Resolution Visuals

While the old, rigid image quality policies are being retired, the importance of visual quality has never been higher. Google’s AI performs best when it has high-quality “raw material” to work with. Advertisers should focus on providing high-resolution images that are


SEO Vs. PPC Strategy: What’s Right For Your Business? via @sejournal, @brookeosmundson

The Digital Dilemma: Choosing Between SEO and PPC

In the modern digital landscape, the search engine results page (SERP) is the most valuable real estate on the internet. For business owners, marketers, and stakeholders, the core challenge is determining how to capture that space most effectively. This decision usually boils down to a choice—or a balance—between two primary disciplines: Search Engine Optimization (SEO) and Pay-Per-Click (PPC) advertising.

While both strategies aim to drive traffic and increase conversions through search engines like Google and Bing, they function on fundamentally different mechanics. SEO is the art and science of earning organic visibility through relevance and authority. PPC, on the other hand, is a model where businesses pay for a top-tier position via an auction system. Choosing the right path requires a deep dive into your business goals, budget, competitive landscape, and timeline.

What is Search Engine Optimization (SEO)?

SEO is the process of optimizing your website to rank higher in the “organic” or non-paid section of search engine results. It is a long-term strategy that focuses on providing the best possible answer to a user’s query. Because search engines want to provide value to their users, they reward sites that demonstrate experience, expertise, authoritativeness, and trustworthiness (E-E-A-T).

The Three Pillars of SEO

To succeed in SEO, a business must master three distinct areas:

1. Technical SEO: This ensures that search engine crawlers can easily index and understand your site. It involves site speed optimization, mobile-friendliness, secure connections (HTTPS), and a clean site structure. If your technical foundation is weak, your content may never see the light of day.
2. On-Page SEO: This is the content that users actually see. It includes keyword research, high-quality copywriting, meta descriptions, and header tags. The goal is to align your content with the intent of the searcher.
3. Off-Page SEO: Often equated with link building, off-page SEO is about building the reputation of your site. When other reputable websites link to your content, it acts as a “vote of confidence,” signaling to search engines that your site is an authority in your niche.

The Advantages of SEO

The primary draw of SEO is its long-term sustainability. Unlike paid ads, organic traffic doesn’t stop the moment you stop investing. Once you have established a high ranking for a valuable keyword, that position can continue to drive traffic for months or even years with minimal maintenance.

Furthermore, SEO carries a level of credibility that paid ads often lack. Many users have developed “banner blindness” and instinctively skip over the sponsored results to find the top organic listings. High organic rankings signal to the user that your brand is a leader in the industry, fostering trust before the user even clicks on your link.

From a cost perspective, SEO offers a higher potential ROI over time. While the upfront costs of content creation and technical fixes can be significant, the “cost per lead” typically drops as your organic authority grows. You aren’t paying for every individual click; you are investing in a digital asset that grows in value.

What is Pay-Per-Click (PPC)?

PPC is a digital advertising model where advertisers pay a fee each time one of their ads is clicked. Essentially, it’s a way of buying visits to your site, rather than attempting to “earn” those visits organically. The most prominent platform for this is Google Ads, which places sponsored listings at the very top and bottom of the search results.

How the PPC Auction Works

PPC is not just about who has the most money. It operates on an auction system that considers both the bid amount and the “Quality Score” of the ad. Quality Score is determined by the relevance of your ad to the search query, the click-through rate (CTR), and the quality of the landing page. This means a well-optimized ad can often beat a competitor who is bidding more but providing a poorer user experience.

The Advantages of PPC

The most significant advantage of PPC is speed. While SEO can take six months to a year to show significant results, a PPC campaign can be launched in an afternoon and start driving traffic within minutes. This makes PPC an ideal choice for new product launches, seasonal promotions, or businesses that need to generate revenue immediately.

PPC also offers unparalleled targeting capabilities. You can specify exactly who sees your ads based on geography, time of day, device type, and even specific demographics. If you want to target users in a five-mile radius of your store who are searching for “emergency plumbing” at 3:00 AM on a Tuesday, PPC allows you to do exactly that.

Additionally, PPC provides a controlled environment for testing. You can run A/B tests on headlines, calls to action, and landing pages to see exactly what resonates with your audience. The data gathered from PPC campaigns—such as which keywords actually lead to sales—is incredibly valuable and can even be used to inform your broader SEO strategy.

SEO Vs. PPC: Comparing the Key Metrics

To decide which strategy fits your business, you must compare how they perform across several critical business metrics.

1. Time to Results

SEO is a marathon; PPC is a sprint. If your business is in a “growth at all costs” phase and needs leads today to survive, PPC is the clear winner. However, if you are building a brand for the next decade, SEO is the foundation you cannot afford to ignore.

2. Cost and Budgeting

With PPC, your costs are highly predictable but linear. If you want twice as much traffic, you generally have to spend twice as much money. With SEO, the costs are front-loaded. You might spend $5,000 a month for six months with zero return, but by month twelve, you could be receiving $50,000 worth of “free” traffic every month. SEO scales exponentially, while PPC scales linearly.

3. Click-Through Rates (CTR)

Statistically, organic results receive the vast majority of clicks. While the top PPC ad might get a 2-3% CTR, the top organic result
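To illustrate the auction mechanics described above, here is a deliberately simplified sketch in which ad position is driven by bid multiplied by Quality Score. The advertiser names, bids, and scores are invented, and the real Google Ads auction weighs additional factors and prices clicks differently; the point is only that a cheaper, more relevant ad can outrank a bigger budget.

```python
# Simplified model: ad rank = max CPC bid x Quality Score (illustrative only).
advertisers = [
    {"name": "Big Spender", "max_cpc": 6.00, "quality_score": 4},
    {"name": "Relevant Local Shop", "max_cpc": 3.50, "quality_score": 9},
    {"name": "Average Ad", "max_cpc": 4.00, "quality_score": 5},
]

for ad in advertisers:
    ad["ad_rank"] = ad["max_cpc"] * ad["quality_score"]

# Higher ad rank wins the better position.
for position, ad in enumerate(sorted(advertisers, key=lambda a: a["ad_rank"], reverse=True), start=1):
    print(f'{position}. {ad["name"]}: bid ${ad["max_cpc"]:.2f}, QS {ad["quality_score"]}, ad rank {ad["ad_rank"]:.1f}')
```

In this toy auction, the lowest bid paired with the strongest Quality Score takes the top slot, which is the dynamic that lets well-optimized ads beat larger budgets.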

Uncategorized

SMX Now: Learn how brands must adapt for AI-driven search

The Paradigm Shift: Moving Beyond Traditional Rankings

The digital marketing landscape is currently undergoing its most significant transformation since the invention of the search engine itself. For decades, the goal of search engine optimization (SEO) was clear: rank as high as possible in the “ten blue links.” If you were on page one, you were winning. If you were in the top three results, you were thriving.

However, the rise of large language models (LLMs) and generative AI has fundamentally altered the mechanics of discovery. Visibility in the modern era is no longer just a matter of keyword density or backlink profiles. Today, your brand’s success depends on whether your content is discovered, evaluated, and ultimately selected by AI-driven search experiences. Systems like Google’s Search Generative Experience (SGE), Perplexity AI, and OpenAI’s search capabilities do not just list websites; they synthesize information. They act as curators, deciding which sources are authoritative enough to be cited and which should be ignored.

To help brands navigate this complex new reality, the upcoming SMX Now webinar series is launching with a deep dive into the strategies required to survive and thrive in an AI-first world. On April 1 at 1 p.m. ET, industry leaders from iPullRank—Zach Chahalis, Patrick Schofield, and Garrett Sussman—will lead a session focused on how brands must adapt their digital presence to meet these evolving standards.

Understanding AI-Driven Search Experiences

Traditional search engines work like a massive library index. They crawl the web, index pages based on keywords and technical signals, and then retrieve them when a user enters a matching query. AI-driven search, or “Generative Search,” functions more like a research assistant. It doesn’t just find the pages; it reads them, understands the context, and generates a cohesive answer for the user. In this environment, the “winner” isn’t necessarily the site with the highest Domain Authority. Instead, the winner is the source that provides the most relevant, structured, and verifiable data that the AI can easily digest and repurpose.

This shift has given rise to a new discipline known as Generative Engine Optimization (GEO). The SMX Now session will explore why GEO is the natural evolution of SEO. While SEO was about optimizing for algorithms that rank, GEO is about optimizing for models that reason. This requires a shift in mindset from “how do I rank for this keyword?” to “how do I ensure this AI model trusts my information enough to use it in its answer?”

Introducing the r19g Framework: Relevance Engineering

One of the highlights of the upcoming webinar is the introduction of iPullRank’s proprietary Relevance Engineering framework, often abbreviated as r19g. This framework is designed to bridge the gap between traditional technical SEO and the requirements of modern AI models. Relevance Engineering focuses on the intersection of data science, linguistics, and search technology. It moves beyond the surface-level optimization of meta tags and headings. Instead, it looks at how content is structured to be “retrieval-friendly.”

In a world dominated by Retrieval-Augmented Generation (RAG), AI models don’t just “know” things; they pull from a vast vector database of real-time information. The r19g framework provides a roadmap for executing an omnichannel content strategy that ensures your brand’s information is the primary source retrieved by these models.
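As a rough illustration of the retrieval step described above, the sketch below ranks a handful of content passages against a user question and surfaces the closest matches. It uses simple word-overlap vectors purely so the example is self-contained; production RAG systems rely on dense embeddings from a neural model and a dedicated vector database, and the passages here are hypothetical.

```python
# Toy retrieval sketch: rank content passages by cosine similarity to a query.
# Real RAG pipelines use dense embeddings and a vector database; bag-of-words
# vectors are used here only to keep the example runnable on its own.
# All passages are hypothetical.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Very rough stand-in for an embedding: lowercase word counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

passages = [
    "A buying guide comparing waterproof hiking boots for muddy trails.",
    "Press release announcing our new corporate headquarters.",
    "Care instructions for leather hiking boots, including waterproofing spray.",
]

query = "which waterproof hiking boots are best for wet trails"
query_vec = vectorize(query)

# Rank passages by similarity; a real system would hand the top results to an
# LLM, which then writes the generated answer and (ideally) cites the source.
for passage in sorted(passages, key=lambda p: cosine(query_vec, vectorize(p)), reverse=True):
    print(f"{cosine(query_vec, vectorize(passage)):.2f}  {passage}")
```

The point of the sketch is simply that retrieval happens before generation: content that is clearly written and topically focused scores higher at this stage, which is the gap Relevance Engineering aims to close.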
By focusing on relevance at a granular level, brands can ensure their content isn’t just indexed by Google, but is actually “understood” by the LLMs that power search summaries.

The Concept of Query Fan-Outs in AI Search

To optimize for AI, marketers must first understand how these models process user intent. A traditional search engine might take a long query and break it down into core keywords. An AI model, however, performs what is known as a “query fan-out.” When a user asks a complex question, the AI doesn’t just search for that specific string of text. It generates several underlying sub-queries to gather a comprehensive set of data points. For example, if a user asks, “What is the best mid-range laptop for video editing in 2024?” the AI might fan out into queries regarding:

– Current top-rated laptops under $1,200.
– Hardware requirements for software like Adobe Premiere or DaVinci Resolve.
– Recent reviews from reputable tech sites published within the last six months.
– Comparisons of GPU performance in mid-range chipsets.

The SMX Now session will explain how AI search uses these query fan-outs to discover and select sources. By understanding this process, brands can create content that addresses not just the primary question, but the likely “sub-questions” that an AI will ask during its research phase. This increases the likelihood of being cited as a comprehensive and authoritative source. (A minimal sketch of this idea appears at the end of this post.)

Retrieval, Surfacing, and Citation: The New Funnel

In the world of AI-driven search, the traditional marketing funnel has been replaced by a new process: Retrieval, Surfacing, and Citation.

1. Retrieval: This is the first hurdle. Is your content formatted in a way that an AI’s retrieval system can find it? This involves technical optimizations like schema markup, clean HTML structures, and the use of semantic entities that help an AI categorize your content correctly within its vector space.

2. Surfacing: Once the AI has found your content, it must decide if it is relevant enough to “surface” for the specific prompt. This is where Relevance Engineering (r19g) becomes critical. The model evaluates the depth, accuracy, and context of your information against other retrieved sources.

3. Citation: This is the ultimate goal. The AI doesn’t just use your information; it provides a link and a brand mention, driving traffic back to your site. Getting cited by an AI summary is the new “Position Zero.” It establishes your brand as the definitive authority on the topic.

The webinar will provide actionable steps on how to structure content specifically to hit these three stages. This includes strategies for using structured data, clear and concise language, and authoritative evidence that makes it easy for a machine to verify your claims.

Why GEO Success Isn’t Universal

One of the most important takeaways from the
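To make the query fan-out concept referenced above concrete, here is a minimal content-planning sketch that checks whether a draft touches on the sub-queries an AI system might generate for one primary question. The sub-queries, the expected terms, and the simple keyword check are all hypothetical stand-ins; a real generative engine produces its own fan-out internally.

```python
# Illustrative sketch: check how well a content draft covers the hypothetical
# sub-queries an AI might "fan out" into for one primary question.
# The sub-queries and keyword lists are assumptions for planning purposes,
# not a description of how any specific engine works.
import re

primary_question = "What is the best mid-range laptop for video editing in 2024?"

# Hypothetical fan-out: sub-queries the model might research, each paired with
# terms a genuinely comprehensive piece of content would be expected to mention.
fan_out = {
    "top-rated laptops under $1,200": ["price", "budget", "value"],
    "hardware requirements for editing software": ["premiere", "resolve", "ram"],
    "recent reviews from tech sites": ["review", "benchmark", "tested"],
    "GPU performance in mid-range chipsets": ["gpu", "graphics", "render"],
}

draft = """Our 2024 guide compares budget laptops for creators. We measure
render times in DaVinci Resolve, list minimum RAM and GPU requirements,
and include price and value notes for each pick."""

words = set(re.findall(r"[a-z]+", draft.lower()))

print(f"Primary question: {primary_question}\n")
for sub_query, terms in fan_out.items():
    covered = [t for t in terms if t in words]
    status = "covered" if covered else "gap"
    print(f"{status:7}  {sub_query}  (matched: {', '.join(covered) or 'none'})")
```

A draft that leaves one of these sub-queries uncovered (here, the “recent reviews” angle) is less likely to be treated as a comprehensive source during the fan-out, which is exactly the kind of gap this sort of check is meant to surface before publishing.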
