
Your ROAS looks great — but is it actually driving growth?

The Dangerous Seduction of High ROAS

Every digital marketer has experienced the rush of checking a dashboard and seeing a Return on Ad Spend (ROAS) that looks like a statistical anomaly. A 10x, 15x, or even 20x return suggests that for every dollar you put into the machine, twenty dollars in revenue are pouring out the other side. In many boardrooms, this is the cue to open the champagne and double the budget. However, for the sophisticated growth marketer, a high ROAS is not always a cause for celebration. Sometimes, it is a warning sign. The fundamental question isn’t just “What is the return?” but rather, “Would this revenue have existed without the ad?” The gap between reported performance and actual incremental growth is where millions of marketing dollars are lost every year.

When an ecommerce company hires a PPC agency, the honeymoon period usually consists of high conversion volumes and a healthy ROAS. On the surface, the strategy is a resounding success. But if you look closer, you might find that the campaign is simply standing in front of a door that was already open. If those conversions would have occurred anyway via direct visits or organic search, the paid campaigns are merely taxing the business rather than growing it.

The eBay Experiment: A Lesson in Causal Lift

To understand the limitations of ROAS, we must look at one of the most famous case studies in the history of paid search: the eBay experiment. In 2013, researchers from the University of California, Berkeley, teamed up with eBay to analyze the effectiveness of the company’s massive spend on branded search terms. At the time, eBay was spending millions of dollars bidding on its own name. Their internal metrics showed a massive ROAS. However, the researchers conducted a controlled experiment: they turned off paid search ads for the keyword “eBay” in specific geographic regions while keeping them active in others.

The results were startling. In the regions where the ads were turned off, organic traffic picked up nearly 100% of the lost clicks. The revenue remained almost identical. The conclusion was clear: eBay was paying for traffic it already owned. Despite this evidence, many brands continue to spend heavily on brand keywords. Sometimes this is a defensive move to prevent competitors from poaching the top spot, but often it is a “safe” way to inflate reported ROAS. Platforms love these campaigns because they provide high-confidence conversions, but from a business growth perspective, they represent zero incremental value.
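To make the eBay-style readout concrete, here is a minimal sketch of how a geo-holdout test translates into an incrementality figure. The structure and field names are illustrative assumptions, not eBay’s or any platform’s actual methodology:

```typescript
// Minimal sketch of a geo-holdout incrementality readout (illustrative).
interface GeoCell {
  region: string;
  revenue: number; // total revenue during the test window
  adSpend: number; // paid spend during the test window (0 for holdouts)
}

const sum = (cells: GeoCell[], pick: (c: GeoCell) => number) =>
  cells.reduce((acc, c) => acc + pick(c), 0);

function incrementalReadout(treatment: GeoCell[], control: GeoCell[]) {
  const revenueOn = sum(treatment, c => c.revenue);
  const revenueOff = sum(control, c => c.revenue);
  const spend = sum(treatment, c => c.adSpend);

  // Scale the holdout baseline to the treatment group's size.
  // Assumes regions were matched; real tests weight by market size.
  const baseline = revenueOff * (treatment.length / control.length);
  const incrementalRevenue = revenueOn - baseline;

  return {
    incrementalRevenue,
    // Incremental ROAS: return per dollar that would NOT have happened anyway.
    incrementalROAS: spend > 0 ? incrementalRevenue / spend : 0,
  };
}
```

In eBay’s case, a readout like this would have shown incremental revenue near zero despite a high in-platform ROAS, which is precisely the gap this article is about.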
The Black-Box Trap: Performance Max and Advantage+

As digital advertising moves toward total automation, the difficulty of measuring true growth has intensified. Modern advertising tools, such as Google’s Performance Max (P-Max) and Meta’s Advantage+, are essentially black boxes. They use machine learning to find the users most likely to convert, but they don’t necessarily prioritize finding *new* customers. Algorithms are designed to achieve the goal you set for them. If you tell an algorithm to maximize ROAS, it will find the path of least resistance to a conversion. Often, this path leads straight to your existing customers. Automation thrives on “safe” signals, which often results in the following:

Brand Search Cannibalization: Algorithms bid aggressively on your brand name because those users are the most likely to buy.

Aggressive Retargeting: The system serves ads to users who have already added items to their cart and were seconds away from checking out.

Reporting Bias: Platforms claim credit for any user who saw an ad and eventually purchased, even if the ad had no influence on their decision.

Without a way to measure incrementality, automation simply amplifies these non-incremental signals. You may see your ROAS climb, but your total business revenue remains stagnant. You aren’t scaling your business; you are scaling your platform spend.

Incrementality: Measuring Causal Impact

Incrementality is the gold standard for measuring marketing effectiveness. It refers to the “causal lift” created by a specific campaign. In simpler terms, it answers the question: “What changed because this campaign existed?” While platform attribution tells you which channel was the last touchpoint before a sale, incrementality tells you if the sale would have happened in a world where that channel was turned off. This is a much more useful lens for budget allocation. A channel can have a fantastic in-platform ROAS and still generate a weak incremental impact if it is merely harvesting demand rather than creating it.

Think of it this way: Attribution is like a scoreboard in a basketball game. It tells you who took the last shot. Incrementality is like an advanced scouting report. It tells you how much better the team performs when a specific player is on the court versus when they are on the bench. If the team scores the same amount of points regardless of whether that player is playing, that player’s “incrementality” is zero, regardless of how many shots they take.

The Difference Between Demand Generation and Demand Capture

To master incrementality, you must distinguish between campaigns that create new demand and those that capture existing demand. High-funnel activities, such as YouTube awareness ads or social media prospecting, often have lower reported ROAS because they are introducing people to the brand for the first time. However, their incrementality is often very high because they are moving people who would never have considered your brand into the sales funnel. Conversely, bottom-of-funnel activities like branded search and retargeting often have astronomical ROAS but low incrementality. They are simply capturing the demand that was created by your high-funnel activities, your brand reputation, or word-of-mouth.

The Hidden Metric: Marginal ROAS

Even if you prove that a channel is incremental, you still need to know how much to spend on it. This is where Marginal ROAS comes into play. Marginal ROAS measures the return on the *next* dollar of spend, rather than the average return across the entire budget. Every marketing channel is subject to the law of diminishing returns. The first $1,000 you spend usually targets your “low-hanging fruit”—your most loyal customers


Shorter, Focused Content Wins In ChatGPT via @sejournal, @Kevin_Indig

The New Paradigm of AI-Driven Search Optimization

For more than a decade, the mantra of the SEO industry was “bigger is better.” The “Skyscraper Technique” encouraged creators to find the most comprehensive piece of content on a topic and then double its length, adding more images, more subheadings, and more data points. The goal was to create the “Ultimate Guide”—a massive, all-encompassing resource that search engines like Google would view as the definitive authority on a subject.

However, as we enter the era of Generative AI and tools like ChatGPT, SearchGPT, and Google’s AI Overviews, the rules of the game are undergoing a fundamental shift. Recent data and analysis from industry experts like Kevin Indig indicate a surprising trend: shorter, more focused content is increasingly winning the citation battle within ChatGPT. This shift marks a departure from traditional search engine optimization (SEO) toward what many are calling Generative Engine Optimization (GEO). In this new landscape, the ability to provide a precise, direct answer to a specific user intent is becoming more valuable than the ability to cover twenty different subtopics in a single URL.

How ChatGPT Processes and Cites Information

To understand why focused content is outperforming exhaustive guides, we must first understand how Large Language Models (LLMs) like ChatGPT interact with the web. Unlike traditional search engines that index keywords and rank pages based on backlinks and dwell time, ChatGPT uses a process often referred to as Retrieval-Augmented Generation (RAG). When a user asks a question, ChatGPT doesn’t just display a list of links. It searches the web for relevant “chunks” of information, pulls that data into its context window, and synthesizes a narrative response. If your content is cited, it’s because the AI determined that your specific paragraph or section was the most accurate and concise answer to the user’s prompt.

When a page is 5,000 words long and covers fifteen different subtopics, the “signal-to-noise” ratio can become diluted. The AI must sift through thousands of words of “fluff” or tangential information to find the relevant data. Conversely, a shorter piece of content—perhaps 600 to 1,000 words—that focuses exclusively on one specific subtopic provides a much cleaner signal. This makes it easier for the AI to identify your content as the primary authority for that specific query.

The Data Behind the Shift: Why Fewer Subtopics Win

The research highlighting this trend suggests a strong correlation between topical focus and citation frequency. In large-scale data analyses of ChatGPT’s browsing behavior, pages that stayed strictly “on-topic” were cited significantly more often than comprehensive pillar pages. There are several technical and psychological reasons for this:

First, there is the issue of context window limitations. While AI models are becoming more powerful, they still have a finite amount of “attention” they can give to a single source during the retrieval phase. A highly focused article allows the model to ingest the entire context of the page without exceeding its processing limits or losing the core message in a sea of secondary information.

Second, focused content reduces “topic dilution.” In the world of traditional SEO, we often talked about “keyword cannibalization.” In the world of AI, we are seeing “intent dilution.” If a page tries to answer “What is SEO?”, “How to do SEO?”, and “The History of SEO” all at once, the AI may find it less authoritative for a specific query about “SEO history” compared to a page that is 100% dedicated to that single historical timeline.
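To illustrate the retrieval mechanics described above, here is a minimal sketch of chunk-level scoring. It uses simple token overlap as a stand-in relevance measure; real RAG pipelines use embeddings, but the dilution effect works the same way, and all names here are illustrative:

```typescript
// Toy RAG scoring: a page competes chunk by chunk, so off-topic
// sections dilute its overall relevance to a given query.
function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

// Fraction of query terms present in a chunk (stand-in for embeddings).
function relevance(query: string, chunk: string): number {
  const q = tokenize(query);
  const c = tokenize(chunk);
  let hits = 0;
  for (const term of q) if (c.has(term)) hits++;
  return q.size > 0 ? hits / q.size : 0;
}

// Score a page split into heading-delimited chunks.
function pageSignal(query: string, chunks: string[]) {
  const scores = chunks.map(ch => relevance(query, ch));
  return {
    bestChunk: Math.max(...scores),
    // "Focus": how much of the page is on-topic. A 600-word page about
    // one subtopic scores high; a 5,000-word pillar page scores low.
    focus: scores.reduce((a, b) => a + b, 0) / scores.length,
  };
}
```

Under this toy model, two pages can contain the same best answer, but the narrowly scoped page carries a far higher overall focus score, which mirrors why intent-specific URLs win citations.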
The Death of the ‘Ultimate Guide’ Era?

Does this mean the “Ultimate Guide” is dead? Not necessarily, but its role is changing. In the past, the Ultimate Guide served as a “one-stop shop” for users and a “link magnet” for webmasters. While these pages may still earn backlinks and rank in traditional Google Search results, they are increasingly struggling to capture the “Citation” or “Source” box in AI-generated responses. The problem with exhaustive guides is that they often prioritize breadth over depth. They provide a high-level overview of many things but may lack the granular, specific details that an AI needs to answer a complex, multi-step prompt.

As users move away from searching for simple keywords and toward asking complex questions, they are looking for specific solutions. ChatGPT mirrors this behavior by seeking out the most direct answer available. If your content requires the user (or the AI) to scroll past three sections of “What is [X]?” to get to the “How to fix [X]” section, you are at a disadvantage compared to a site that has a dedicated page for “How to fix [X].”

Generative Engine Optimization (GEO): A New Strategy

To adapt to these findings, digital publishers and SEO professionals need to rethink their content architecture. This transition to Generative Engine Optimization requires a shift in how we plan our content calendars and structure our articles.

1. Prioritize Intent-Specific URLs

Instead of creating one massive page that covers an entire industry, break your content down into “intent clusters.” Each URL should solve one specific problem or answer one specific question. If you are writing about “Gaming Laptops,” don’t just make one page for “Best Gaming Laptops 2024.” Create specific, shorter pieces for “Best Gaming Laptops for Ray Tracing,” “Best Budget Gaming Laptops Under $1000,” and “Most Portable Gaming Laptops.”

2. The Power of the ‘Niche-Down’

The data shows that ChatGPT favors experts. By narrowing the focus of your content, you signal to the AI that your page is a specialized resource rather than a generalist overview. Specialized resources are perceived as more reliable and are thus more likely to be cited as a source of truth.

3. Use Modular Content Structures

Even within a shorter, focused piece, structure remains vital. Use clear, descriptive H2 and H3 headings that mirror the way people ask questions. Instead of a heading like “Battery Life,” use “How long does the battery last on the [Product Name]?” This makes it incredibly easy for an AI’s retrieval algorithm to “hook” onto your content and pull it into


How to optimize for keywords you can’t use

In the world of search engine optimization, we are often taught that the golden rule is alignment. We align our content with user intent, and we align our on-page copy with the specific keywords people are typing into the search bar. But what happens when the very keywords driving the most traffic are the ones your brand, your legal department, or your industry standards forbid you from using?

This is a high-stakes challenge that many SEO professionals face, particularly in niche markets, regulated industries, or when dealing with powerful trademarks. You are tasked with capturing massive search demand while simultaneously being told that the primary search term is off-limits. It feels like trying to win a race with one hand tied behind your back. However, modern search engines are smarter than they used to be. We are no longer in the era of exact-match keyword stuffing. Today, search is about entities, context, and semantic meaning. It is entirely possible to rank for a term without making it your primary headline—or in some cases, without using it as a descriptor for your own product at all. Here is how to navigate the complex landscape of optimizing for keywords you can’t use.

The Conflict: User Behavior vs. Brand Guidelines

The disconnect usually happens because of a gap between how people actually speak and how a business wants to be perceived. This conflict typically falls into three categories: trademark restrictions, industry stigma, and internal brand evolution.

Trademarks are perhaps the most common hurdle. Consider the term “Koozie.” While millions of people use the word “koozie” to describe any foam sleeve that keeps a canned drink cold, “Koozie” is actually a registered trademark. If you are a manufacturer of similar products but do not own that trademark, using it prominently as a product name could land you in legal hot water. Yet, the search volume for “custom koozies” dwarfs the volume for “custom can coolers.”

Industry stigma is another common driver. In the senior living sector, for instance, the term “nursing home” carries a massive amount of search volume. However, many modern facilities prefer the terms “skilled nursing,” “assisted living,” or “continuing care retirement communities” because “nursing home” is often associated with outdated, clinical environments. The dilemma is clear: if you don’t use the term “nursing home,” you miss out on the majority of the market searching for your services. If you do use it, you risk alienating your target audience or violating brand positioning.

Regardless of the reason, the goal remains the same: you must bridge the gap between the searcher’s vocabulary and the brand’s vocabulary.

1. Leverage Data to Negotiate the Terms

Before diving into creative workarounds, your first step should always be a thorough data audit. Sometimes, stakeholders refuse to use a term because they don’t realize how much opportunity they are leaving on the table. Presenting hard numbers can often soften a rigid stance or at least open the door for “controlled” usage of a term. When you show a client that “skilled nursing near me” attracts 4,400 monthly searches while “nursing home near me” attracts over 27,000, the conversation changes from a matter of “preference” to a matter of “revenue.” Use tools like Semrush, Ahrefs, or Google Keyword Planner to pull localized data.

If a specific term is the lifeblood of the industry’s search traffic, you might be able to negotiate its use in specific, less-prominent areas of the site, such as a blog post or a deep-level FAQ page, rather than the homepage H1 tag. Confirm the level of restriction. Is the term “never to be seen on the site,” or is it simply “not our primary descriptor”? Understanding the boundaries allows you to maximize the remaining surface area for optimization.
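A quick way to frame that negotiation is to express each term’s share of total demand. The sketch below uses the illustrative volumes cited above; the numbers are examples from this article, not live data:

```typescript
// Quantify how much combined search demand each vocabulary choice captures.
const monthlyVolumes: Record<string, number> = {
  "nursing home near me": 27_000,   // the restricted, high-volume term
  "skilled nursing near me": 4_400, // the brand-approved alternative
};

function demandShare(volumes: Record<string, number>) {
  const total = Object.values(volumes).reduce((a, b) => a + b, 0);
  return Object.entries(volumes).map(([term, volume]) => ({
    term,
    volume,
    share: `${((volume / total) * 100).toFixed(1)}%`,
  }));
}

console.table(demandShare(monthlyVolumes));
// The restricted term accounts for roughly 86% of the combined demand;
// that is the figure that moves the conversation from preference to revenue.
```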
2. Build a Semantic Web Around the Term

Modern search engines use sophisticated language models to understand the semantic “neighborhood” of a keyword. If you can’t use the word “Koozie,” you can still use every other word that is traditionally associated with it. By building a rich context of related terms, you signal to the search engine exactly what the page is about without ever needing to say the “forbidden” word.

For a drink cooler, this means using terms like “insulated sleeves,” “can chillers,” “neoprene foam,” “keep drinks cold,” and “tailgating accessories.” If the page discusses bachelorette parties, weddings, outdoor barbecues, and custom printing for foam sleeves, Google’s algorithms are smart enough to categorize that page under the “koozie” umbrella. You are essentially painting a picture of the keyword without drawing the lines.

3. Deconstruct Phrases and Use Component Keywords

If your target keyword is a multi-word phrase, you can often gain traction by using the individual components of that phrase frequently throughout the copy, even if they never appear together in the exact restricted order. Take the “nursing home” example. If you cannot use the phrase “nursing home” as a compound noun, you can still discuss the high quality of your “nursing” care and the “home-like” environment of your facility. By using “nursing” and “home” as separate entities within the same semantic space, you provide the building blocks for the search engine to correlate your content with the search query “nursing home.” This approach keeps your brand voice intact—you are talking about your “nursing” services and your “residential home”—while still checking the boxes for the search engine’s indexing process.

4. Use Indirect References and Comparison Logic

One of the most effective ways to include a restricted keyword is to use it in a way that differentiates your product from the common term. This allows the keyword to appear on the page for SEO purposes without the brand claiming the term as its own. Headers and subheaders are great places for this. A senior living facility might use a heading like “Why Families Choose Our Community Over a Traditional Nursing Home.” This phrasing is natural, provides value to the reader, and places the high-volume keyword “nursing home” directly into an H2


Google Lists 9 Scenarios That Explain How It Picks Canonical URLs via @sejournal, @martinibuster

Introduction to Canonicalization in Modern SEO

In the complex ecosystem of search engine optimization, one of the most critical yet frequently misunderstood concepts is canonicalization. At its core, canonicalization is the process by which a search engine like Google decides which version of a duplicate or near-duplicate page should be treated as the authoritative “master” version. While this sounds straightforward, the reality is that Google uses a sophisticated blend of signals to make this determination, often looking far beyond the simple tags provided by webmasters. Google’s John Mueller has recently shed light on the specific scenarios and signals that the search engine uses to identify canonical URLs. Understanding these scenarios is vital for SEO professionals and site owners who want to ensure that their preferred pages are the ones appearing in search results, accumulating link equity, and being prioritized for crawling.

When multiple URLs point to the same content, search engines face a dilemma: which URL should be indexed and ranked? If left unresolved, this can lead to issues with crawl budget efficiency, diluted page authority, and an inconsistent user experience. By mastering how Google picks canonical URLs, you can take control of your site’s visibility and technical health.

The Concept of the “User-Declared” vs. “Google-Selected” Canonical

Before diving into the specific scenarios, it is important to distinguish between the two types of canonicals recognized by Google. The first is the **user-declared canonical**. This is the URL that you, as the site owner, tell Google you prefer. This is typically done through the rel="canonical" link element in the HTML head. It serves as a strong suggestion to the search engine. The second is the **Google-selected canonical**. This is the URL that Google’s algorithms actually choose to index and display in the Search Engine Results Pages (SERPs). While Google tries to respect the user-declared canonical, it is not an absolute directive. If other technical signals point toward a different URL, Google will override your choice. This is where Mueller’s nine scenarios become essential for diagnosing why your preferred URLs might not be showing up as expected.

1. The Presence of the Rel-Canonical Link Element

The most obvious and direct signal is the rel="canonical" link element. This tag is placed in the <head> section of a webpage and points to the preferred URL. Mueller emphasizes that while this is a primary signal, its effectiveness depends on consistency. If you have a canonical tag pointing to Page A, but Page A itself points to Page B, you create a canonical loop or conflict. Google looks for clear, non-conflicting signals. If the tag is present and matches the content of the page, Google is highly likely to honor it, provided other signals don’t contradict it.

2. Redirects as a Definitive Signal

Redirects are perhaps the strongest signal you can send to Google regarding your canonical preferences. When a 301 (permanent) redirect is implemented, you are explicitly telling the search engine that the old URL has moved and that the new destination is the one that should be indexed. Google views a redirect as a clear instruction. If URL A redirects to URL B, Google will almost always treat URL B as the canonical version. This is particularly useful during site migrations, URL structure changes, or when merging duplicate content. However, Mueller notes that even 302 (temporary) redirects can eventually lead to a change in the canonical URL if they are left in place for an extended period, as Google may interpret them as permanent.
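To see how a conflicting or looping canonical (scenario 1) can be caught programmatically, here is a minimal audit sketch. The regex-based extraction is a simplification for illustration; a production audit would use a real HTML parser:

```typescript
// Minimal canonical-conflict check: read a page's declared canonical
// and flag self-references, chains, and loops.
async function declaredCanonical(url: string): Promise<string | null> {
  const html = await (await fetch(url)).text();
  // Simplified: assumes rel appears before href inside the link tag.
  const match = html.match(
    /<link[^>]+rel=["']canonical["'][^>]*href=["']([^"']+)["']/i
  );
  return match ? match[1] : null;
}

async function auditCanonical(url: string): Promise<string> {
  const first = await declaredCanonical(url);
  if (!first) return `${url}: no canonical declared`;
  if (first === url) return `${url}: self-canonical (clean)`;

  // Follow one hop: if Page A points to Page B, B should point to itself.
  const second = await declaredCanonical(first);
  if (second && second !== first) {
    return `${url}: conflicting chain (${first} -> ${second}); Google may override`;
  }
  return `${url}: canonicalized to ${first}`;
}
```

Running a check like this across a crawl list surfaces exactly the “Page A points to Page B, which points elsewhere” conflicts Mueller warns about.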
3. Internal Linking Patterns

One of the more subtle signals Google analyzes is how you link to your own content internally. Every internal link on your website acts as a small “vote” for a particular URL. If your rel="canonical" tag points to a URL with a trailing slash (example.com/page/), but your navigation menu and body content consistently link to the version without a slash (example.com/page), Google receives conflicting signals. In many cases, Google will prioritize the URL that is linked to most frequently within the site architecture. To ensure your preferred canonical is selected, you must ensure that every internal link across your site points to that exact version.

4. Sitemap Inclusion and Organization

Sitemaps are essentially a roadmap of your website that you provide to search engines via Google Search Console. Google uses the URLs listed in your XML sitemap as a major hint for canonicalization. The general rule of thumb is that only canonical URLs should be included in your sitemap. If you include non-canonical URLs (such as those with tracking parameters or duplicate versions of a landing page), you confuse the indexing process. Google expects the sitemap to be a clean list of the “master” pages. If a URL is in the sitemap but a different version of the page has a rel="canonical" tag, Google has to weigh these conflicting hints against each other.

5. Security Protocols: HTTPS vs. HTTP

In the modern web, security is a priority. Google has a documented preference for HTTPS over HTTP. If your website is available on both protocols, Google will almost always default to the HTTPS version as the canonical URL, even if you haven’t explicitly set a canonical tag. This scenario highlights Google’s intent to provide the safest experience for users. However, if your SSL certificate is invalid or there are mixed content issues, Google might revert to the HTTP version. It is best practice to force HTTPS sitewide and ensure that all canonical tags and internal links reflect the secure protocol.

6. URL Structure and Cleanliness

Google’s algorithms are designed to prefer “clean” URLs over those cluttered with parameters, session IDs, or tracking codes. If a page can be accessed via example.com/product and example.com/product?utm_source=twitter, Google will naturally lean toward the shorter, cleaner version as the canonical. John Mueller has often mentioned that shorter URLs are generally preferred for indexing because they are more stable and user-friendly. While parameters are often necessary for marketing and tracking, they should be handled via the URL Parameter Tool in Search Console


Microsoft makes it easier to import Google PMax campaigns

The Evolution of Cross-Platform Campaign Management

In the rapidly shifting landscape of digital advertising, automation has moved from being a luxury to a fundamental necessity. Microsoft Advertising has been steadily closing the gap with its primary competitor, Google Ads, by refining its own version of Performance Max (PMax). To further incentivize advertisers to diversify their ad spend, Microsoft has introduced a series of robust updates designed to streamline the transition from Google to the Microsoft Advertising ecosystem. The most significant of these updates is the improved ability to import Google PMax campaigns, specifically those utilizing New Customer Acquisition (NCA) goals. For years, the friction of rebuilding complex, data-driven campaigns from scratch acted as a barrier to entry for many brands looking to expand their reach to the Bing and Yahoo networks. Microsoft’s latest move acknowledges this reality, offering a more “plug-and-play” experience for performance marketers who want to capitalize on Microsoft’s unique audience without the administrative headache of manual recreation.

Simplifying New Customer Acquisition (NCA) Goal Imports

Performance Max campaigns are unique because they leverage machine learning to optimize for specific conversion outcomes across all of an ad network’s available inventory. One of the most powerful features within this framework is the New Customer Acquisition (NCA) goal. This setting allows advertisers to bid more aggressively for users who have never purchased from them before, or to restrict bidding exclusively to new customers. Microsoft Advertising launched its own NCA features earlier this year, but the process of syncing these goals from Google Ads was not always seamless. With the latest update, which is now live for all advertisers, Microsoft has refined the import logic. When a marketer imports a Google PMax campaign that utilizes NCA goals, Microsoft will now automatically carry those goals over if they do not already exist in the user’s Microsoft account. This ensures that the strategic intent of the original campaign remains intact during the migration.

Crucially, Microsoft has implemented safeguards to prevent accidental data loss or configuration errors. If an advertiser already has existing NCA settings within their Microsoft account, the import process will not overwrite them. This allows for a layered approach where global settings are preserved while specific campaign structures are updated.

Handling Audience Lists and Remarketing Segments

A significant challenge in cross-platform imports involves how different networks define and categorize audiences. Microsoft has introduced a sophisticated mapping system to ensure that Google’s audience segments translate accurately to Microsoft’s infrastructure. This mapping includes several key logic points:

Website Visitor Segments: Google’s website visitor segments are automatically converted into Microsoft remarketing lists, allowing for consistent retargeting strategies across both search engines.

Standard Lists: Broad segments such as “All Visitors” and “All Converters” from Google are mapped directly to their equivalent counterparts in Microsoft Advertising.

Unsupported Lists: For segments that do not have a direct one-to-one equivalent—such as certain types of Google Customer Match lists—Microsoft will prompt advertisers to utilize fallback options, ensuring that the campaign does not launch “blind” without any audience data.

This automated mapping reduces the risk of reaching the wrong audience and minimizes the time marketers spend auditing imported lists for accuracy.
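The mapping rules above reduce to a small decision table. Here is a minimal sketch of that logic; the type names and list labels are illustrative assumptions, not Microsoft’s actual import API:

```typescript
// Sketch of the described import mapping: direct equivalents map
// one-to-one, unsupported list types fall back to a prompt.
type GoogleSegment =
  | { kind: "website_visitors"; name: string }
  | { kind: "all_visitors" }
  | { kind: "all_converters" }
  | { kind: "customer_match"; name: string };

type ImportResult =
  | { status: "mapped"; microsoftList: string }
  | { status: "needs_fallback"; reason: string };

function mapSegment(segment: GoogleSegment): ImportResult {
  switch (segment.kind) {
    case "website_visitors":
      // Converted into a Microsoft remarketing list.
      return { status: "mapped", microsoftList: `Remarketing: ${segment.name}` };
    case "all_visitors":
      return { status: "mapped", microsoftList: "All Visitors" };
    case "all_converters":
      return { status: "mapped", microsoftList: "All Converters" };
    case "customer_match":
      // No direct one-to-one equivalent: prompt for a fallback so the
      // campaign never launches "blind" without audience data.
      return { status: "needs_fallback", reason: `No direct match for "${segment.name}"` };
  }
}
```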
A Conservative Approach to Customer Classification

One of the most noteworthy technical details of this update is how Microsoft handles “unknown” customers. In the world of privacy-first browsing and cookie deprecation, it is not always possible for an advertising platform to definitively know if a user is a new or returning customer. Attribution gaps are a common frustration for PPC specialists. Microsoft has decided to take a conservative stance on this issue. When a user’s status is unknown, Microsoft will classify them as an existing customer rather than a new one. While this may seem counterintuitive for a campaign seeking new blood, it is a strategic move designed to prevent the overcounting of new customer conversions. By defaulting to the “existing” category, Microsoft ensures that the Return on Ad Spend (ROAS) and CPA (Cost Per Acquisition) metrics for new customers are not artificially inflated, providing advertisers with a more honest and reliable dataset for their reports.

Enhanced Transparency: Landing Page and Search Term Reporting

Historically, one of the primary criticisms of Performance Max—on both Google and Microsoft—has been the “black box” nature of its reporting. Advertisers often felt they were surrendering too much control to the algorithm without seeing exactly where their money was going. Microsoft is addressing these concerns by introducing enhanced visibility for PMax campaigns.

Final URL (Landing Page) Reporting

Advertisers can now access detailed reporting for their landing pages (Final URLs) within PMax campaigns. This feature allows marketers to see critical performance indicators, including:

Total Spend and Clicks per URL.

Total Impressions.

Conversion Value and ROAS.

By being able to segment this data by campaign and asset group, advertisers can identify which specific pages on their site are resonating with the PMax audience. This is particularly valuable for e-commerce brands with thousands of product pages, as it helps them understand which landing pages require further optimization or higher budget allocation.

Search Term Visibility and Future Updates

In addition to landing page data, Microsoft is making search term reporting more visible by default. Transparency into what users are actually typing into the search bar before clicking an ad is essential for negative keyword management and creative refinement. Microsoft has also teased further transparency updates scheduled for the near future, including auction insights and additional publisher URL metrics. These tools will provide a clearer picture of the competitive landscape and where ads are appearing across the Microsoft Search Network and the Microsoft Audience Network.

Administrative and Workflow Enhancements

Beyond the headline PMax import features, Microsoft has rolled out several quality-of-life updates that cater to large-scale advertisers and agencies managing complex account structures.

Seasonality Adjustments for Portfolio Bid Strategies

Seasonality adjustments are a vital tool for managing short-term events, such as flash sales or holiday promotions, where conversion rates are expected to spike significantly for a brief period. Microsoft has expanded the support for these adjustments to include portfolio


ChatGPT citations reward ranking and precision over length: Study

The New Frontier of Generative Engine Optimization

The landscape of search engine optimization is undergoing its most significant transformation since the advent of mobile-first indexing. As OpenAI’s ChatGPT continues to evolve from a simple chatbot into a sophisticated research tool, the focus for digital marketers has shifted toward “Generative Engine Optimization” (GEO). For years, the goal was simply to appear on the first page of Google. Now, the goal is to be cited as a primary source by the world’s leading AI. A comprehensive study by AirOps, which analyzed 16,851 unique queries and over 50,000 generated responses, has shed light on exactly what it takes to earn a citation in ChatGPT. The findings challenge many long-held SEO beliefs, particularly the notion that “longer is always better.” Instead, the study reveals that ChatGPT prioritizes retrieval rank, heading precision, and content focus over the sheer volume of information.

The Power of Retrieval: Why Traditional SEO Still Matters

One of the most striking revelations of the AirOps study is that traditional search engine rankings remain the single most important factor for earning an AI citation. ChatGPT does not exist in a vacuum; it uses a process called Retrieval-Augmented Generation (RAG) to browse the live web, find relevant information, and synthesize an answer. If your content is not visible to the retrieval mechanism, it will never be cited. According to the data, the page in the top search position was cited 58.4% of the time. This percentage drops significantly as you move down the search results. A page in position 10 has only a 14.2% chance of being cited. This suggests that while ChatGPT is “intelligent,” its initial selection of sources is heavily dependent on existing search engine algorithms. To win the AI citation game, you must first win the traditional SEO game.

The Retrieval Gap

The drop-off from position one to position ten highlights a “retrieval gap.” ChatGPT tends to favor the most authoritative and highly-ranked sources provided by its underlying search engine (primarily Bing). For brands, this means that the core pillars of SEO—backlinks, technical performance, and domain authority—are still the foundation upon which AI visibility is built. You cannot optimize for ChatGPT if you haven’t first optimized for the search engines that feed it.

Precision Over Breadth: The Decline of the “Ultimate Guide”

For the last decade, the “Ultimate Guide” has been the gold standard of content marketing. SEOs believed that by creating a 10,000-word skyscraper article that covered every possible facet of a topic, they could capture more keywords and provide more value. However, the AirOps study suggests that for ChatGPT, this approach may actually be counterproductive. The data shows that focused pages—those that answer a specific query narrowly and directly—consistently outperformed broader, more comprehensive guides. When a user asks a specific question, ChatGPT looks for the most direct answer. A page that meanders through twenty different sub-topics before reaching the core answer creates “noise” that can interfere with the AI’s ability to extract the relevant data.

The Danger of Keyword Dilution

When a page attempts to be everything to everyone, its topical relevance becomes diluted. ChatGPT’s citation mechanism rewards precision. If a page is laser-focused on a single intent, it is much easier for the AI to verify that the content is a perfect match for the user’s request. This shift marks a move away from “comprehensive content” toward “specific content.”
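The study’s two anchor figures make the retrieval gap easy to model. The sketch below linearly interpolates between them purely for illustration; only the endpoints (58.4% at position 1, 14.2% at position 10) come from the study, and the linear shape is an assumption:

```typescript
// Rough model of citation probability by SERP position, anchored on the
// AirOps figures. The linear interpolation between them is an assumption.
const CITED_AT_POS_1 = 0.584;
const CITED_AT_POS_10 = 0.142;

function approxCitationRate(position: number): number {
  const p = Math.min(Math.max(position, 1), 10);
  const t = (p - 1) / 9; // 0 at position 1, 1 at position 10
  return CITED_AT_POS_1 + t * (CITED_AT_POS_10 - CITED_AT_POS_1);
}

for (const pos of [1, 3, 5, 10]) {
  console.log(`Position ${pos}: ~${(approxCitationRate(pos) * 100).toFixed(1)}% citation rate`);
}
// Under this simple model, each rank lost costs roughly five percentage
// points of AI visibility, which is the practical meaning of the gap.
```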
Heading Relevance: The Strongest On-Page Signal

While retrieval rank is the strongest external signal, heading relevance is the most critical on-page factor identified in the study. Pages that used headings that closely mirrored the user’s query were cited 41.0% of the time. In contrast, pages with weaker or more creative heading matches saw citation rates hover around 30%. This suggests that ChatGPT’s “browsing” behavior relies heavily on the document’s structure to navigate and understand its contents. If a user asks “How to calibrate a gaming monitor,” a page with an H2 titled “How to Calibrate a Gaming Monitor” is far more likely to be cited than a page that uses a more stylistic heading like “Getting the Most Out of Your Display’s Colors.”

Best Practices for AI-Ready Headings

To maximize your chances of being cited, your subheadings should be functional and descriptive rather than clever or evocative. They should serve as clear signposts for the AI. Use natural language that reflects the way users phrase questions. If you can anticipate the specific questions a user might ask, and use those questions as your H2 or H3 tags, you significantly increase your “citation-readiness.”

The Goldilocks Zone of Content Length

One of the most surprising findings of the AirOps report is the impact of word count on citations. There is a “Goldilocks zone” for content length: not too short, but certainly not too long. Pages between 500 and 2,000 words performed best in terms of earning citations. Surprisingly, pages longer than 5,000 words were cited less often than pages with fewer than 500 words. This confirms that ChatGPT values efficiency. Long-form content often contains “fluff” or tangential information that increases the token count for the AI without adding proportional value. In the world of RAG, more tokens often mean more processing and a higher likelihood of the AI missing the most relevant nugget of information buried deep in the text.

Why 5,000+ Words Can Be a Liability

When ChatGPT crawls a page, it has a “context window”—a limit to how much information it can process at once. Very long pages may be truncated or summarized in a way that loses the specific details needed to answer a query. Furthermore, longer pages are more likely to cover multiple topics, which, as established, reduces the precision that ChatGPT rewards. If you have a topic that requires 5,000 words, it may be more effective to break it into three or four separate, highly-focused articles linked together, rather than one massive guide.

The Timing of Freshness: The 30 to 90-Day Window

Content freshness has always been a ranking factor for Google, but its


Google AI Mode in Chrome now lets you search deeper with fewer tabs

The Evolution of Search: Why Tab Fatigue is Becoming a Thing of the Past

For decades, the ritual of online research has remained largely unchanged. You type a query into a search engine, get a list of results, and then middle-click your way through a dozen different tabs to find the specific information you need. This process often leads to “tab fatigue,” a state where your browser is cluttered with indistinguishable favicons, and your computer’s RAM is struggling to keep up. Google is now addressing this friction head-on by integrating AI Mode more deeply into the Chrome browser architecture. The latest updates to Google Chrome’s AI Mode are designed to streamline the research process by reducing the need to jump between windows and tabs. By bringing contextual awareness and side-by-side viewing capabilities directly into the browsing experience, Google is transforming Chrome from a simple window to the web into a proactive research assistant. This shift signifies a broader trend in the tech industry: the move from “search and find” to “search and synthesize.”

The Power of Side-by-Side Search in Chrome

One of the most significant hurdles in modern browsing is the loss of context. When you find a promising link in an AI-generated response, clicking it usually takes you away from your conversation with the AI. You then have to navigate back and forth to ask follow-up questions or clarify details. Chrome’s new side-by-side search feature eliminates this back-and-forth entirely. When using AI Mode on the desktop version of Chrome, clicking a link within the AI’s response now opens the webpage in a panel immediately adjacent to the AI interface. This layout allows users to view the source material while simultaneously maintaining their chat history. Whether you are comparing technical specifications for a new laptop or verifying facts for an academic paper, having the source and the assistant visible at the same time ensures that the context of the search is never lost.

This layout is particularly beneficial for complex queries. For instance, if you are using AI Mode to find a recipe, you can click a blog post to see the full instructions in the side panel while asking the AI for substitution suggestions or unit conversions in the main window. It creates a seamless workflow where the browser adapts to the user’s research needs rather than forcing the user to adapt to the browser’s limitations.

Search Across Your Tabs: Integrating Contextual Awareness

Perhaps the most technically impressive update is the ability to “search across your tabs.” Historically, an AI assistant only knew what you told it in a specific chat session. It had no “awareness” of the other information you might have open in different windows. Google is breaking down these silos by allowing Chrome users to bring data from their active tabs into AI Mode. By tapping the new “plus” menu on the New Tab page or within the AI Mode interface, users can now select recent or active tabs to include as context for their search. This allows for a level of personalization and relevance that was previously impossible.

Imagine you are planning a vacation and have three different hotel tabs open, a flight itinerary in another, and a list of local attractions in a fifth. Instead of manually copying and pasting details into a prompt, you can simply “add” those tabs to your search. Once these tabs are integrated, AI Mode can deliver highly tailored responses.
You could ask, “Based on the hotels I have open, which one is closest to the museum in my other tab?” or “Create a three-day itinerary using the locations I’m currently looking at.” This feature effectively turns your open tabs into a temporary, personalized knowledge base for the AI to draw from, significantly reducing the manual labor involved in cross-referencing information.

Multi-Input Capabilities: Beyond Text-Based Queries

The modern web is composed of much more than just HTML text. It includes images, complex data tables, and PDF documents. To reflect this, Google has expanded AI Mode to support multi-input queries. Users can now mix and match various media types—including images and files—to provide the AI with the fullest possible context. The integration of PDF support is a game-changer for professionals and students alike. Rather than spending hours skimming a 50-page whitepaper or a technical manual, a user can upload the PDF directly into Chrome’s AI Mode and ask for a summary, specific data points, or a comparison with another document. Because this happens within the browser, it removes the need for third-party PDF editors or external AI tools, keeping the workflow centralized and secure.

Furthermore, image-based searching is now more intuitive. By bringing images into the AI Mode context, users can ask questions about visual data. This might include identifying a part in a technical diagram or asking for the nutritional information based on a photo of a food label. By combining these inputs with the “search across tabs” feature, Google is creating a multi-modal search engine that understands the web the same way humans do: as a collection of interconnected text, visuals, and documents.

Direct Access to Creative Tools: Canvas and Image Generation

Google is not just positioning AI Mode as a tool for consumption; it is also a tool for creation. The new updates provide easier access to integrated tools like Canvas and image generation. These features are now accessible wherever the new “plus” menu appears in Chrome, making it easier to transition from research to production. The Canvas tool is particularly noteworthy for developers and writers. It provides a dedicated space within the browser for writing long-form content or coding, with the AI acting as a co-pilot. If you are using AI Mode to research a specific programming library, you can jump straight into Canvas to test out snippets of code that the AI generates, all without leaving the Chrome environment. Similarly, the image generation feature allows users to create visual assets on the fly, which can be useful for presentations, social media posts, or simply


New Google Spam Policy Targets Back Button Hijacking via @sejournal, @MattGSouthern

Understanding Google’s Latest Crackdown on Manipulative Web Practices

Google has officially updated its search quality and spam policies to include a specific focus on a long-standing user frustration: back button hijacking. This deceptive technique, often used by low-quality websites to trap visitors on a page or force unwanted redirects, has moved from being a simple nuisance to a direct violation of Google’s malicious practices policy. This update signifies a major shift in how Google evaluates site navigation and user autonomy, reinforcing the search engine’s commitment to a friction-free browsing experience.

For years, users have encountered websites that refuse to let them return to their search results. You click the back button, but instead of returning to Google, the page simply refreshes, stays put, or redirects you to a completely different advertisement or affiliate site. By categorizing this behavior as a malicious practice, Google is sending a clear signal to webmasters: user control is non-negotiable. Websites that persist in using these tactics risk severe ranking penalties or complete removal from the search index.

What Exactly is Back Button Hijacking?

Back button hijacking, also known as history manipulation, occurs when a website uses scripts to interfere with a browser’s back button functionality. Under normal circumstances, the back button should take the user to the previous URL in their browsing history. However, hijacking disrupts this logical flow. There are several ways this is technically achieved, but the most common method involves the HTML5 History API. The History API allows developers to modify a user’s browser history without triggering a full page reload. While this is incredibly useful for Single Page Applications (SPAs) and modern web design to ensure smooth transitions, it can be easily weaponized. Hijackers use history.pushState() or history.replaceState() to insert multiple “fake” entries into the browser’s history stack the moment a user lands on a page. Consequently, when the user tries to go back, they are simply navigating through these artificial entries, keeping them on the same domain or cycling them through a loop of redirects.

In other instances, sites might use “meta refresh” tags or complex JavaScript redirects that trigger specifically when the browser detects a back-navigation event. The result is always the same: the user is prevented from leaving the site, which creates an experience that is not only annoying but fundamentally deceptive.
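For illustration, here is a minimal sketch of the abusive pattern just described, written against the standard History API. It is shown so you can recognize it during a code audit, not something to deploy; the redirect target is a placeholder:

```typescript
// The hijacking pattern: bury the real referrer under fake history
// entries, then intercept back-navigation and redirect instead.
function hijackBackButton(): void {
  // Stuff several fake entries into the history stack on page load,
  // so the user's real referrer (e.g., the Google SERP) is buried.
  for (let i = 0; i < 5; i++) {
    history.pushState({ trap: i }, "", `#view-${i}`);
  }

  // Each press of Back now fires popstate for a fake entry. Instead of
  // letting the user leave, the script forces a redirect.
  window.addEventListener("popstate", () => {
    window.location.href = "https://example.com/sponsored-page"; // placeholder
  });
}
```

If an audit of your site or your third-party scripts turns up unconditional pushState loops combined with popstate redirects like this, that is exactly the signature the new policy targets.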
The Official Policy Update and Enforcement Timeline

Google has integrated back button hijacking into its existing list of malicious practices under the broader Spam Policies. This alignment means that Google views history manipulation with the same level of severity as phishing, malware distribution, and deceptive software downloads. This is a significant escalation from simply considering it a “bad user experience” metric. The timeline for enforcement is critical for webmasters and SEO professionals to note. Google has announced that full enforcement of this policy will begin on June 15, 2026. While this date may seem distant, it provides a necessary window for complex sites to audit their codebase. Google has specified that sites have a two-month grace period from the initial announcement to identify and remove any offending code before the manual and algorithmic enforcement mechanisms are fully deployed. By providing a clear deadline, Google is allowing site owners to conduct thorough audits. Many legitimate sites may inadvertently trigger these flags due to poorly implemented third-party scripts, advertising widgets, or legacy code. The long lead time suggests that Google expects widespread compliance and will likely be uncompromising once the June 2026 deadline arrives.

Why Google is Targeting This Practice Now

Google’s primary product is its search engine, and its value is derived from the quality of the journey it provides to users. If a user clicks a result in Google and finds themselves “trapped” on a site, the user’s trust in Google’s recommendations diminishes. The “back to search” journey is a fundamental part of how people use the internet—it is the safety net that allows users to explore different sources. The rise of mobile browsing has made back button hijacking even more problematic. On mobile devices, where screen space is limited and navigation is often gesture-based, being unable to return to a previous screen is a significant accessibility hurdle. Mobile users are more likely to abandon a search entirely if they encounter a site that hijacks their navigation, leading to a degraded mobile web ecosystem.

Furthermore, this practice is frequently associated with “made for advertising” (MFA) sites and low-quality affiliate hubs. These sites use hijacking to inflate their session duration and pageview metrics, artificially boosting their perceived value to advertisers. By cutting off this tactic, Google is effectively targeting the economic incentives behind low-quality web content.

How Back Button Hijacking Impacts SEO

The inclusion of history manipulation in the spam policy means the consequences for SEO are direct and potentially devastating. Unlike “soft” ranking factors like page speed or keyword density, spam policy violations often lead to manual actions. A manual action is a penalty issued by a human reviewer at Google, which can result in a site being demoted or completely delisted from search results. Beyond manual actions, Google’s algorithms are increasingly capable of detecting patterns of deceptive navigation. If the algorithm identifies that a significant portion of users are unable to return to the SERP (Search Engine Results Page) via the back button, it may categorize the site as “unhelpful.” This fits into the broader “Helpful Content” framework that Google has been refining for years. A site that prevents users from leaving is, by definition, not being helpful.

Additionally, back button hijacking negatively affects user signals. While Google has traditionally been vague about the direct impact of “pogo-sticking” (users jumping back and forth between the SERP and results), there is no doubt that high abandonment rates and forced engagement do not contribute to a healthy SEO profile. When a user finally manages to escape a hijacked site, they are unlikely to return, leading to a long-term decay in brand authority and organic click-through rates.

Identifying Back Button Hijacking on Your Site

Not all back


Gemini blocked more than 99% of bad ads before they ran in 2025

The Evolution of Digital Advertising Security: A New Era Under Gemini

The digital advertising landscape has long been a battleground between legitimate businesses looking to reach customers and malicious actors seeking to exploit the system. For years, Google has relied on a combination of automated filters and human review to maintain the integrity of its massive ad network. However, as bad actors have become more sophisticated, utilizing generative AI to create convincing scams at scale, the defense mechanisms had to evolve. Enter Gemini, Google’s most capable multimodal AI model, which has fundamentally transformed how the company polices its ecosystem.

According to the 2025 Ads Safety Report, Google is now leaning more heavily than ever on Gemini to secure its platforms. The results are staggering: Google blocked or removed more than 8.3 billion ads globally last year and suspended nearly 25 million advertiser accounts. Most importantly, the report highlights that Gemini successfully blocked more than 99% of these policy-violating ads before they ever had the chance to reach a user. This proactive approach marks a significant shift from reactive moderation to predictive prevention.

The Core Metrics of Google’s 2025 Ads Safety Report

The scale of Google’s enforcement actions in 2025 provides a clear picture of the ongoing “AI arms race” in ad safety. The sheer volume of data processed by Gemini is unprecedented. Below are the key figures that define the company’s efforts over the past year:

8.3 Billion: The total number of ads blocked or removed globally.

24.9 Million: The number of advertiser accounts suspended for serious or repeated violations.

602 Million: Scam-related ads specifically identified and removed.

4 Million: Accounts linked directly to scam operations that were permanently shuttered.

4.8 Billion: Ads that were restricted based on regional laws or specific industry regulations.

480 Million: Individual web pages that were blocked or restricted from hosting Google ads.

245,000+: Publisher sites that faced enforcement actions for policy violations.

These numbers represent a massive logistical challenge that would be impossible to manage through human oversight alone. By integrating Gemini into the core of its safety infrastructure, Google has been able to process information at a speed and depth that previous systems could not match.

How Gemini Is Redefining Ad Enforcement

The transition to Gemini-based enforcement represents a departure from traditional, keyword-based detection systems. In the past, bad ads were often caught because they contained specific “trigger” words or patterns associated with scams. However, sophisticated scammers quickly learned how to bypass these filters by using synonyms or deceptive formatting. Gemini changes this dynamic by shifting the focus from keywords to intent and context. Google has stated that Gemini can analyze hundreds of billions of signals simultaneously. These signals include not just the text of the ad itself, but the age of the advertiser’s account, their historical behavior patterns, the landing page content, and the specific campaign activity. By looking at the “big picture,” Gemini can identify malicious intent even when the ad itself appears harmless on the surface. This ability to understand nuance is why Google was able to stop 99% of bad ads before they launched.

A Massive Leap in User Report Processing

Another area where Gemini has made a significant impact is in the processing of user feedback.
When a user flags an ad as a scam or inappropriate, that report must be verified before action is taken. In 2025, Gemini allowed Google to process four times more user reports than in the previous year. This rapid response time is critical in shutting down “flash” scams—malicious campaigns that run for a very short period to avoid detection while still reaching thousands of victims.

Reducing False Positives for Legitimate Businesses

One of the biggest pain points for legitimate advertisers has always been the “false positive”—when a perfectly valid ad is flagged or an account is suspended due to an automated error. These disruptions can be devastating for small businesses that rely on consistent ad traffic for their revenue. Google reports that Gemini has significantly improved the accuracy of its enforcement, cutting incorrect advertiser suspensions by 80%. This improvement is largely due to Gemini’s advanced reasoning capabilities. By better understanding the context of an ad, the AI can distinguish between a legitimate financial service and a predatory loan scam, or between a health supplement and a dangerous unregulated drug. This nuance ensures that while the “bad guys” are kept out, legitimate brands experience fewer disruptions.

The Geographic Focus: Enforcement in the United States

While ad safety is a global concern, the United States remains a primary target for sophisticated scam operations. In 2025, Google removed 1.7 billion ads and suspended 3.3 million advertiser accounts within the U.S. alone. The data reveals the specific areas where policy violations are most frequent, providing insight into the types of content Gemini is most often flagging.

Top Policy Violations in the U.S.

The 2025 report identifies five major categories of violations that led to the majority of enforcement actions in the American market:

Abusing the Ad Network: This includes techniques like “cloaking,” where an advertiser shows one version of a landing page to Google’s reviewers and a completely different (often malicious) version to users.

Misrepresentation: This category covers ads that make false claims or use deceptive tactics to trick users into providing personal information or making a purchase. This often includes “deepfake” celebrity endorsements or fake news layouts.

Sexual Content: Google maintains strict policies regarding adult content to ensure that ads remain suitable for a general audience.

Personalization Violations: This involves advertisers attempting to target users based on sensitive categories, such as health conditions or financial status, in ways that violate Google’s privacy policies.

Dating and Companionship: While not inherently prohibited, this sector is highly regulated to prevent human trafficking and fraud, leading to a high volume of restricted or blocked ads.

By identifying these trends, Google can further train Gemini to recognize the specific tactics used within these high-risk categories, creating a more robust defense for U.S. consumers.

The Double-Edged Sword: When Automation Goes


Why your website is now the source of truth in local AI search

Open ChatGPT, Claude, or Google Gemini and search for a local business you know has a strong, established online presence. Ask the AI for a specific recommendation in that category—perhaps a law firm, a specialized plumber, or a boutique marketing agency. In many cases, the business will appear in the response. If you dig deeper and look at the citations or sources the AI provides, you will almost certainly see the business’s own website listed as a primary reference.

This reveals a fundamental shift in the digital landscape: AI does not conjure answers out of thin air. Large Language Models (LLMs) and AI search engines are not creative engines in the sense of inventing facts; they are retrieval engines. They pull from the most credible, accessible, and comprehensive information they can find. If your website is not the most complete and authoritative source of information about your own business, the AI will be forced to assemble a narrative from digital scraps—third-party directories, outdated reviews, or even competitor mentions. When that happens, you lose control of your brand story.

Many business owners and digital marketers are currently asking the same existential question: “Do I even need a website anymore? If AI answers every query directly in the search results, why does my own domain matter?” The answer is that your website has evolved. It is no longer just a digital brochure or a lead-generation tool; it is now a source document. AI systems treat it as the authoritative input for their knowledge graphs. The real question is no longer whether you need a website, but who gets to define your business: you or a fragmented collection of third-party sources.

Zero-click doesn’t mean zero opportunity

The rise of “zero-click” searches—where a user gets an answer directly on the search engine results page (SERP) without clicking through to a website—has many marketers feeling uneasy. They see impressions holding steady while click-through rates (CTR) dip, leading to the premature conclusion that websites are becoming obsolete. However, this is a misunderstanding of how search intent works in the age of AI. Fewer clicks do not equate to less importance. Instead, the nature of the click has changed.

When we look at the data regarding where AI Overviews (AIOs) actually appear, a clear pattern emerges. Analysis of Ahrefs data covering over 46 million keywords shows that nearly 99% of keywords triggering an AI Overview are informational in nature. Navigational keywords, where a user is looking for a specific site, account for a mere 0.13%.

What does this mean for your business? It means the traffic you are “losing” to AI was likely never high-intent, revenue-driving traffic to begin with. If someone wants a quick fact—like “what is the average cost of a roof repair”—they get it from the AI and move on. These were “top of the funnel” visits that rarely resulted in immediate conversions.

However, commercial and transactional keywords only make up 12.5% and 3.5% of AI Overview triggers, respectively. (Note that these totals overlap, as a single keyword can have multiple intents.) The clicks that drive your bottom line—the ones tied to phone calls, service bookings, and consultations—still happen. These high-value queries occur further down the funnel, after an AI has already made a recommendation. When a customer is ready to pull the trigger, they don’t just trust the AI blindly; they navigate to the website to validate the recommendation. Your website is the destination for the “validation phase.”
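If you want to sanity-check figures like these against your own keyword set, the arithmetic is simple to reproduce. The sketch below is hypothetical: the file name keywords.csv and its columns (keyword, intents, triggers_aio) are invented for illustration, so you would need to reshape whatever your SEO tool actually exports into this layout.

```python
# Hypothetical sketch: measure which search intents trigger AI Overviews
# in your own keyword export. The file name and column layout are invented
# for illustration; adapt them to whatever your SEO tool exports.
import csv
from collections import Counter

aio_intents = Counter()  # intent -> count of AIO-triggering keywords
aio_total = 0

with open("keywords.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["triggers_aio"].strip().lower() != "true":
            continue
        aio_total += 1
        # A keyword can carry several intents (e.g. "informational|commercial"),
        # which is why the percentages below can sum to more than 100%.
        for intent in row["intents"].split("|"):
            aio_intents[intent.strip()] += 1

if aio_total == 0:
    raise SystemExit("no AI-Overview-triggering keywords found")

for intent, count in aio_intents.most_common():
    print(f"{intent:15s} {count / aio_total:6.1%} of AIO-triggering keywords")
```

Because one keyword can carry several intents, the printed shares can exceed 100% in total, mirroring the overlap caveat in the Ahrefs figures above.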
AI recommends, your customer decides: Know the difference

Imagine a homeowner asking an AI assistant, “Who is the most reliable emergency plumber in downtown Chicago?” The AI will likely surface three or four names. It does this by pattern-matching on location signals, review sentiment, and the content it has indexed from various websites. At this stage, the AI is offering a starting point, not a final verdict.

The AI is not the one signing the contract or handing over credit card information. For high-stakes local decisions—choosing a pediatrician, a criminal defense attorney, or a high-end contractor—consumers are not going to act solely on an algorithmic suggestion. The “human element” of decision-making requires a level of trust that an AI summary cannot provide on its own.

After the AI provides its recommendation, the customer’s journey typically follows a predictable path:

They search for the specific business name to find the official site.
They read the most recent reviews to check for consistency.
They look at photos of past work or the team to establish a visual connection.
They visit the website to confirm the business offers the exact service they need at a price point they find acceptable.

This validation phase is where the deal is closed. AI might get you a seat at the table, but your website is what wins the contract.

AI is actually making your website more valuable

It is a paradox of the modern web: the more AI dominates the search experience, the more valuable your original content becomes. AI systems are constantly “reading” your website to determine exactly what you do, who you serve, and why you are better than the competition. They cross-reference your site content with your Google Business Profile, local directory listings, and social media mentions to ensure your business is legitimate and consistent.

When your website provides a clear, structured, and consistent narrative, the AI gains “confidence” in your business. High confidence leads to higher placement in AI-generated recommendations. Conversely, when your website is thin on details or contradicts your other listings, the AI’s confidence drops, and you get skipped in favor of a competitor with a clearer digital footprint.

Your website is now effectively a source document for LLMs. If you don’t provide the data, the AI will fill in the blanks using whatever it can find elsewhere—perhaps a disgruntled Yelp review from five years ago or an outdated directory that lists your old office address. By maintaining a robust website, you ensure the AI pulls from the most accurate and flattering version of your story.
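One widely used way to make that source document machine-readable is schema.org LocalBusiness markup embedded in the page as JSON-LD. The sketch below is a minimal, hypothetical example: every business detail is a placeholder, and the markup is generated with a short Python script purely so it is easy to adapt; the property names themselves are standard schema.org vocabulary.

```python
# Minimal, hypothetical example of schema.org LocalBusiness markup.
# Every business detail below is a placeholder; the property names are
# standard schema.org vocabulary (https://schema.org/LocalBusiness).
import json

local_business = {
    "@context": "https://schema.org",
    "@type": "Plumber",  # pick the most specific schema.org type that fits
    "name": "Example Plumbing Co.",
    "url": "https://www.example.com",
    "telephone": "+1-312-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example St",
        "addressLocality": "Chicago",
        "addressRegion": "IL",
        "postalCode": "60601",
        "addressCountry": "US",
    },
    "openingHours": "Mo-Fr 08:00-18:00",
    "areaServed": "Downtown Chicago",
    "sameAs": [
        # Links that let crawlers tie this page to your other profiles,
        # keeping your story consistent across the web (placeholders).
        "https://www.facebook.com/exampleplumbing",
        "https://www.google.com/maps?cid=0000000000",
    ],
}

# Emit the <script> block you would paste into the site's <head>.
print('<script type="application/ld+json">')
print(json.dumps(local_business, indent=2))
print("</script>")
```

The design point is consistency: the name, address, and phone number emitted here should match your Google Business Profile and directory listings exactly, since mismatches are precisely what erodes the algorithmic “confidence” described above.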
