Why better signals drive paid search performance

In the modern landscape of digital advertising, the role of the PPC manager has undergone a seismic shift. We have moved away from the era of manual bid adjustments and granular keyword obsession, entering a period dominated by automation and machine learning. In this increasingly automated environment, paid search performance is constrained by a simple, inescapable reality: algorithms can only optimize toward the signals they are given. Consequently, improving those signals remains the most reliable way to improve results in a competitive market.

While the concept of “better signals” sounds straightforward, its execution is where most advertisers struggle. Many accounts are still optimizing around vanity metrics or surface-level signals that do not reflect actual business outcomes. To succeed today, you must stop viewing the algorithm as a magic wand and start viewing it as a high-powered engine that requires high-octane fuel to run correctly. That fuel is your data.

In this guide, we will explore the inner workings of bidding algorithms, the specific signals you can influence, and the strategic framework required to align your data with real-world business growth.

How bidding algorithms actually work

Modern bidding systems, such as Google’s Smart Bidding or Microsoft Advertising’s automated solutions, are frequently described as “black boxes.” The term suggests systems that operate mysteriously, according to whims advertisers cannot understand. That view is counterproductive: to master paid search, you must understand the mechanics of the engine.

At a high level, bidding algorithms are large-scale pattern recognition systems. They don’t “think” in the human sense; they calculate probabilities based on historical data and real-time context. Early iterations of automated bidding were relatively primitive, relying on simple statistical methods, rules-based logic, and regression models.
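Whatever the model family, the bid decision reduces to an expected-value calculation: estimate the probability of the desired outcome for this auction, multiply by its value, and bid up to the efficiency target. A toy sketch of that logic (illustrative only; the function and numbers are assumptions, not Google's actual implementation, and the ROAS target is expressed as a ratio, so 4.0 means 400%):

```python
# Toy model of auction-time bidding: probability x value, capped by the
# efficiency target the advertiser has set. Not Google's implementation.

def auction_time_bid(conv_probability: float, conv_value: float,
                     target_roas: float) -> float:
    """Bid up to the point where expected value still meets the ROAS target."""
    expected_value = conv_probability * conv_value
    return expected_value / target_roas

# A query scored as 4% likely to convert at a $500 value, under a 400%
# ROAS target, justifies a bid of at most $5.
print(auction_time_bid(conv_probability=0.04, conv_value=500.0, target_roas=4.0))  # 5.0
```

The same arithmetic explains why weak conversion signals hurt: if the reported value or the probability feedback is wrong, every bid derived from it is wrong in proportion.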
These early systems were often reactive, looking at past performance to make future guesses. Over time, they evolved into more advanced machine learning approaches using decision trees and ensemble models. Today, they have become large-scale learning systems capable of processing thousands of contextual and historical inputs simultaneously. This is known as “auction-time bidding”: the system evaluates the unique profile of every single search query in milliseconds.

Today’s systems evaluate a massive array of signals, including:

- Query intent: The specific phrasing and nuances of what the user is searching for.
- Device and location: Where the user is and what hardware they are using.
- Time of day: Historical conversion patterns tied to specific hours or days of the week.
- User behavior: Previous interactions with your website or similar brands.
- Competitive dynamics: Who else is in the auction and what their historical behavior suggests.

Despite this complexity, the underlying mechanism has stayed remarkably consistent. Bidding algorithms identify patterns tied to a desired outcome, estimate that outcome’s probability and expected value for each specific auction, and adjust the bid accordingly. They do not understand your business strategy, your quarterly goals, or your brand’s mission. They only infer success from the feedback loop you provide. When that feedback loop is weak, noisy, or misaligned with real business value, even the most advanced algorithms will efficiently optimize toward the wrong objective. Better technology does not compensate for poor inputs.

The signals advertisers can influence

While many of the signals Google and Microsoft use are “inferred” and sit outside an advertiser’s direct control, it is a mistake to think we are powerless. A meaningful set of levers under your control directly shapes how the algorithm learns. These inputs define the environment in which the “black box” operates.
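One concrete example of how a controllable input shapes learning: conversion volume per campaign. Slice an account too finely and no single campaign accumulates enough conversions to reveal a pattern. A rough audit sketch (the 30-conversions-per-month threshold is a commonly cited rule of thumb, an assumption rather than an official Google figure):

```python
# Flag campaigns whose monthly conversion volume may be too thin for the
# bidding algorithm to learn from. The threshold is a rule of thumb.

MIN_MONTHLY_CONVERSIONS = 30

def sparse_campaigns(conversions_by_campaign: dict[str, int]) -> list[str]:
    """Return campaigns that look like consolidation candidates."""
    return [name for name, convs in conversions_by_campaign.items()
            if convs < MIN_MONTHLY_CONVERSIONS]

campaigns = {"brand": 210, "generic-shoes": 12, "generic-boots": 9}
print(sparse_campaigns(campaigns))  # ['generic-shoes', 'generic-boots']
```

Merging the two flagged generic campaigns would give the algorithm one larger bucket of conversions to learn from instead of two it cannot use.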
To influence performance, you must optimize the following areas:

Account and campaign structure

The way you group your data determines how much information the algorithm has to work with. If your structure is too fragmented, the algorithm suffers from “data sparsity,” meaning it doesn’t have enough conversions in a single bucket to find a pattern. Conversely, if it is too consolidated, you might be mixing audiences with vastly different behaviors, confusing the system.

Bidding strategy selection

Choosing between Target CPA (tCPA), Target ROAS (tROAS), or Maximize Conversions is essentially telling the machine which mathematical formula to prioritize. A mismatch here—such as using tCPA for a high-ticket item with a long sales cycle—can lead to stagnant performance.

Budget allocation and risk management

Budgets act as the boundaries of the algorithm’s “playground.” If a budget is too restrictive, the algorithm cannot “explore” new auctions to find cheaper conversions. Effective budget management involves balancing scaling with the risk of diminishing returns.

Targeting and exclusions

While automation handles much of the heavy lifting, exclusions (negative keywords, placement exclusions, audience exclusions) are vital. They act as guardrails, preventing the machine from wasting spend on irrelevant traffic that might look good on paper but never converts.

Ad creative and asset quality

Creative is now a primary targeting signal. In modern systems, the language used in your headlines and descriptions helps the AI understand who your audience is. High-quality assets lead to better engagement, which in turn provides the algorithm with more positive data points to learn from.

Landing page experience

The algorithm doesn’t stop looking at the click. It monitors what happens next. A poor landing page experience leads to high bounce rates and low conversion rates, signaling to the algorithm that the traffic it sent was not valuable.
This creates a downward spiral of lower bids and reduced visibility.

Conversion data: The most important signal

When paid search performance plateaus, the first instinct of many marketers is to blame the campaign structure or the creative. While those are important, the biggest lever available usually sits elsewhere: conversion data. In most modern accounts, conversion data is the single most influential signal you control.

The conversion is the “North Star” for the bidding algorithm. It defines the successful outcome the system is trained to pursue. It directly informs prediction models, bid calculations, and learning feedback loops. If your conversion setup is flawed, the entire machine is broken.

Common issues with conversion data include:

- Noisy signals: Tracking “page views” as


The checks that make or break your next website migration

Website migrations are often regarded as the “open-heart surgery” of the digital world. They are complex, high-stakes operations where even a minor slip of the hand can lead to a catastrophic loss in organic traffic, search engine rankings, and brand authority. Despite the well-earned reputation for risk, most migration failures do not stem from complex technical glitches. They are usually the result of small, avoidable oversights during the planning and execution phases.

A successful migration isn’t just about moving files from one server to another or changing a domain name; it is about preserving the SEO equity you have built over years. To navigate the process safely, you need a structured, staged approach that covers everything from the initial staging environment to the weeks following the go-live date. By implementing a rigorous set of checks, you can ensure your site stabilizes quickly rather than turning into a long-term recovery project.

Before Launch: Catching Issues in the Staging Environment

The staging site is your safety net. It is the controlled environment where you can break things, fix them, and refine your strategy without affecting live users or search engine crawlers. The goal of the pre-launch phase is to catch every potential SEO issue before it reaches the live web. If an error reaches the live site, the cost of fixing it—both in developer time and lost revenue—rises sharply.

Keep the Staging Site Private and Non-Indexable

One of the most frequent (and damaging) mistakes in website migration is leaving the staging site open to search engine crawlers. If Google discovers and indexes your staging environment, you risk creating massive duplicate content issues. In some cases, Google may even decide that the staging site is the “authoritative” version, causing your live site’s rankings to fluctuate or vanish entirely.
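A quick way to audit exposure is to look at how the staging host answers an anonymous request. The sketch below classifies the response; the status codes and header values are supplied by hand here, and a real check would fetch them from your own staging hostname with an HTTP client:

```python
# Classify a staging host's response to an anonymous request.
# 401/403 means an auth wall or IP whitelist is doing its job; a plain
# 200 means anyone, including Googlebot, can fetch the page.

def staging_verdict(status_code: int, x_robots_tag: str = "") -> str:
    if status_code in (401, 403):
        return "locked down"
    if status_code == 200 and "noindex" in x_robots_tag.lower():
        return "reachable but noindex"  # weaker protection than an auth wall
    return "exposed"

print(staging_verdict(401))                       # locked down
print(staging_verdict(200, "noindex, nofollow"))  # reachable but noindex
print(staging_verdict(200))                       # exposed
```

Only the first verdict is truly safe: a reachable-but-noindex site can still leak to users via shared links, and an exposed one can be crawled outright.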
To prevent staging leaks, protect your staging site with a password (HTTP authentication) or IP whitelisting. Relying solely on a “disallow” rule in the robots.txt file is often insufficient, as Google can still index URLs that are linked from elsewhere.

Keeping the site private also protects your user experience. There have been numerous cases of e-commerce customers finding a staging site through search, attempting to place orders on a non-functional checkout page, and ending up frustrated. This creates unnecessary pressure on customer service teams and damages your brand reputation before the new site even launches.

Establish Comprehensive Benchmarks

You cannot measure the success of a migration if you don’t know where you started. Before any changes are made, record a detailed baseline of your current performance. This isn’t just about checking top-level traffic; you need a granular look at your site’s health. Key metrics to record include organic sessions by page, keyword rankings for your top 100 terms, conversion rates for key landing pages, and the total number of indexed pages in Google Search Console. Additionally, document your current site speed using Core Web Vitals. Having this “normal” state documented lets you distinguish between expected short-term volatility and a genuine technical failure after the move.

Identify and Protect Priority Pages

Not all pages are created equal. In any migration, a small percentage of your pages likely drives the majority of your revenue, traffic, or backlink authority. These are your priority pages, and they require manual verification during the mapping and testing process. Ensure their internal linking structure remains intact, their metadata is preserved, and their redirect paths are direct (no chains).

Review Templates and Content Continuity

Modern websites rely heavily on templates.
While templates make scaling easy, a single error in a template header or footer can break thousands of pages simultaneously. Before launching, audit your new site templates to confirm the following elements are present and correct:

- Titles and headings: Ensure H1 tags and meta titles have migrated correctly and aren’t being overwritten by default CMS settings.
- Canonical tags: These must use absolute URLs (not relative paths) and must point to the new, live URLs. Google’s guide on canonical URLs is essential reading here to ensure you are consolidating duplicate signals correctly.
- Structured data: Use schema markup validators to confirm that your Review, Product, or FAQ schema has transferred without errors.
- Media and assets: Check that images, videos, and PDF downloads are still functional and hosted on the correct paths.

Be Intentional About URL Changes

The golden rule of website migrations: if you don’t have to change a URL, don’t. Every time a URL changes, Google has to “re-learn” that page. Unnecessary changes made for the sake of design or CMS convenience introduce risk without adding value.

Common pitfalls include inconsistent trailing slashes (e.g., /blog vs. /blog/), which search engines treat as two distinct URLs. If your old site used trailing slashes and your new one does not, implement site-wide redirects to prevent duplicate content and split link equity. Similarly, avoid changing folder structures or capitalization patterns unless it is strictly necessary for the long-term architecture of the site. Inconsistent URL handling often leads to slower crawling and delayed recovery of rankings.

Map Redirects and Compile Legacy Data

Redirect mapping is perhaps the single most critical technical task in a migration. Compile a list of every URL on your current site by pulling data from multiple sources: your CMS, Google Search Console, your XML sitemaps, and backlink tools like Ahrefs or Semrush.
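Once the map is compiled, it is worth validating programmatically. Assuming a simple {old_url: new_url} mapping (a format chosen here for illustration, not a standard), redirect chains show up wherever an old URL points at another old URL:

```python
# Detect redirect chains: an old URL whose destination is itself being
# redirected, forcing crawlers through multiple hops.

def find_redirect_chains(redirect_map: dict[str, str]) -> list[str]:
    """Return old URLs whose target is another redirected URL."""
    return [old for old, new in redirect_map.items() if new in redirect_map]

mapping = {
    "/old-blog/": "/blog/",
    "/blog-2014/": "/old-blog/",   # hops through another old URL
    "/legacy-shop/": "/shop/",
}
print(find_redirect_chains(mapping))  # ['/blog-2014/']
```

Flattening each flagged entry so the old URL points directly at its final destination keeps every redirect to a single hop, which is exactly the "no chains" requirement for priority pages.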
Pulling from multiple sources ensures that even old pages that still have external backlinks are accounted for.

Every old URL needs a destination. If a page is being retired, redirect it to the most relevant equivalent page. Avoid the lazy shortcut of redirecting all old pages to the homepage; it provides a poor user experience and is often treated as a “soft 404” by Google, meaning you gain no SEO benefit from the redirect. For a structured framework, many experts point to Aleyda Solis’ guide to SEO for web


SEO Test Shows It’s Trivial To Rank Misinformation On Google via @sejournal, @martinibuster

Understanding the Vulnerability of Modern Search Results

In the rapidly evolving landscape of digital information, Google has long positioned itself as the ultimate arbiter of truth. Through its complex ranking algorithms and initiatives like E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness), the search giant aims to prioritize high-quality, factual content. However, a recent and startling SEO experiment has demonstrated that the system is far more fragile than many realized. The test confirms that ranking blatant misinformation is not only possible but, in many cases, trivial for those who understand the mechanics of search engine optimization.

The implications of this experiment are profound, touching on everything from political discourse and public health to the future of AI-driven search. When a search engine becomes a megaphone for falsehoods, the entire ecosystem of digital trust begins to crumble. This deep dive explores how the experiment was conducted, why Google’s sophisticated algorithms were bypassed, and what this means for the future of the internet.

The Mechanics of the Misinformation Experiment

The core of the SEO test involved a relatively straightforward but clever methodology. Researchers and SEO experts, including those documented by Roger Montti, sought to determine if a completely fabricated “fact” could not only rank on the first page of Google but also be accepted by the algorithm as a definitive answer. By creating content around a non-existent event or a false historical detail, the testers eliminated the obstacle of competing with established, factual sources.

In a typical search scenario, Google compares new information against a vast index of known facts. However, when a “new” fact is introduced—something that hasn’t been written about before—the algorithm lacks a baseline for verification.
If the fabricated content is presented on a site with decent technical SEO, proper internal linking, and clear headings, Google’s crawlers often treat it as a “fresh” and “relevant” discovery rather than a potential lie. The experiment proved that the algorithm prioritizes structural signals—such as keyword placement, schema markup, and mobile responsiveness—over the literal truth of the text. Once the fake information was indexed, it didn’t just sit in the back pages of the search results; it climbed to the top, often appearing in featured snippets or as a primary answer for specific queries.

Why Google’s Algorithms Fall for Fabricated Content

It may seem surprising that a multi-billion-dollar AI infrastructure can be fooled by a simple lie. To understand why this happens, we must look at how Google defines “quality.” Google does not have a “truth” sensor; instead, it uses a series of proxies to estimate the likelihood that a page is helpful. These proxies are where the system becomes vulnerable.

The Problem with Freshness and Uniqueness

Google’s “Query Deserves Freshness” (QDF) and its preference for unique content are two pillars of its ranking system. When an SEO professional creates a unique lie, they are providing the algorithm with something it hasn’t seen before. Since the algorithm is trained to value “original research” and “new insights,” it may inadvertently reward misinformation because there is no contradictory data to flag it as false. In the eyes of a bot, a unique lie can look more valuable than a repetitive truth.

The Semantic Trap

Modern search is semantic, meaning it tries to understand the intent and relationships between words rather than just matching keywords. If a piece of misinformation is written in a professional, authoritative tone and uses “entities” (names, dates, and locations) that Google recognizes, the algorithm perceives a high level of topical relevance.
The lie is effectively “hidden” inside a shell of high-quality SEO writing, making it indistinguishable from a well-researched article to an automated crawler.

Reliance on Structural Authority

Search engines place significant weight on the technical health of a website. If a fabricated story is published on a domain with a clean history, fast loading speeds, and a secure HTTPS connection, it gains an immediate advantage. The algorithm assumes that a site which follows technical best practices is more likely to provide reliable content. This experiment highlights a dangerous gap: technical proficiency is not a guarantee of editorial integrity.

The Ripple Effect: How Misinformation Spreads Beyond Search

Perhaps the most concerning discovery from the SEO test was how quickly the misinformation spread to other platforms. The internet is no longer a collection of isolated websites; it is a giant, interconnected feedback loop. Once Google validates a piece of misinformation by ranking it highly, it sets off a chain reaction that is incredibly difficult to stop.

The Role of Scraper Sites and Aggregators

The web is populated by thousands of automated “scraper” sites that monitor high-ranking search results to generate their own content. When the fake fact appeared at the top of Google, these bots automatically copied the information, reworded it, and published it on their own domains. Within hours, a single lie can be mirrored across dozens of websites, creating a false sense of consensus. When Google sees the same “fact” appearing on multiple sites, its confidence in the accuracy of that fact actually increases, further cementing the misinformation’s rank.

The AI Training Loop

This experiment has dire consequences for large language models (LLMs) like ChatGPT, Claude, and Google’s own Gemini. These AI models are trained on data scraped from the open web.
If misinformation is allowed to rank and proliferate on Google, it inevitably ends up in the training sets for future AI. This leads to “model collapse” or “hallucination amplification,” where AI systems confidently state falsehoods because they encountered them multiple times during their training phase.

AI Overviews and Featured Snippets

Google’s AI Overviews (formerly SGE) aim to summarize search results for users. However, these overviews are only as good as the sources they cite. The SEO test showed that Google’s AI summary tools are just as susceptible to misinformation as the standard organic results. If a fabricated article ranks #1, the AI summary will often use that article as its primary source, presenting the lie as a definitive, Google-sanctioned answer. Most users never click past the summary, meaning


EU signals imminent decision on Google DMA probe

The regulatory landscape for global tech giants is shifting once again as the European Union prepares to deliver a potentially landmark ruling. After months of anticipation and mounting pressure from industry stakeholders, the EU’s top antitrust official has signaled that a decision regarding Google’s compliance with the Digital Markets Act (DMA) is imminent. While a specific date remains unconfirmed, the message from Brussels is clear: the period of observation is ending, and the era of enforcement is beginning.

The Digital Markets Act was designed to curb the dominance of “gatekeeper” platforms and ensure a fair, competitive environment for smaller businesses and consumers. Because Google maintains a commanding share of the search market in Europe—exceeding 90% in most member states—the outcome of this probe carries immense weight for the future of search engine optimization (SEO), digital advertising, and the burgeoning field of generative AI.

The Stakes of the Impending Decision

Teresa Ribera, the European Commission’s Competition Commissioner, recently addressed the status of the investigation. In comments made to Dow Jones Newswires, Ribera said, “It will come,” referring to the final decision on the Google probe. She emphasized that the cases are inherently complex, requiring a meticulous review of evidence and a commitment to fair procedure. This careful approach, while legally necessary, has frustrated those who feel Google has been allowed to operate with an unfair advantage for too long.

The investigation, which officially launched in March 2024, focuses on whether Google’s search results and app store practices unfairly favor its own services over those of competitors. This practice, known as “self-preferencing,” is a core violation under the DMA framework.
If the Commission finds Google in breach of these regulations, the consequences could include massive fines—up to 10% of the company’s global annual turnover—and mandated structural changes to how Google displays information to hundreds of millions of European users.

Why the Google Probe is Unique

While the European Commission has already taken action against other tech titans like Meta and Apple under the DMA, the Google investigation has proven to be a more intricate puzzle. The search giant’s ecosystem is deeply integrated into the daily lives of both consumers and businesses, making any forced changes technically and economically significant. Meta has faced scrutiny over its “pay or consent” model, and Apple has been fined for its “steering” rules that prevented developers from informing users of cheaper alternatives outside the App Store. In contrast, Google’s probe touches upon the very architecture of the open web. The way Google ranks websites, displays shopping results, and now integrates AI-generated answers directly into search results (AI Overviews) is under the microscope. The Commission must balance the need for competition with the functional requirements of a high-quality search engine.

Mounting Pressure from Advocacy Groups

The delay in reaching a decision has not gone unnoticed. This month, a coalition of 18 lobby and civil society groups sent a formal letter to Commissioner Ribera, demanding swift and decisive action. The groups argue that the Commission’s credibility is at stake. They contend that every day the status quo remains, European businesses are being systematically disadvantaged by a search algorithm that they claim prioritizes Google’s own interests. The letter highlights a critical concern for the SEO community: if a gatekeeper can control the flow of traffic with impunity, the incentive for independent businesses to invest in high-quality web content diminishes.
The advocates are calling for “clear remedies” that go beyond mere financial penalties. They want to see fundamental shifts in how Google presents search results, ensuring that vertical search services (such as travel, local business, and shopping engines) are given a fair chance to appear alongside Google’s own offerings.

The AI Factor: AI Overviews and Content Rights

Perhaps the most modern and controversial aspect of the EU’s scrutiny involves how Google utilizes data to power its AI Overviews. As Google moves toward an “answer engine” model rather than a “link engine” model, publishers are raising alarms about content theft and the loss of referral traffic. The European Commission is separately investigating how Google ranks news publishers and how it uses third-party content to train and display AI-generated summaries.

Under the DMA, gatekeepers are prohibited from using the data of business users to compete against them. If Google’s AI Overviews are found to be scraping content from publishers to keep users on Google’s own pages—thereby depriving those publishers of ad revenue and visitor data—it could constitute a major violation of the DMA. For SEO professionals and content creators, this ruling could determine the viability of their business models. If the EU mandates that Google must provide more transparency or compensation for the use of publisher data in AI, it could set a global precedent for how the relationship between AI developers and content creators is governed.

Ribera’s High-Stakes Meetings in the US

The timing of Ribera’s announcement is no coincidence. The Competition Commissioner is currently on a high-profile tour of the United States, meeting with the leaders of the tech world’s most powerful companies. Her itinerary includes sessions with Alphabet’s Sundar Pichai, Meta’s Mark Zuckerberg, OpenAI’s Sam Altman, and Amazon’s Andy Jassy.
These meetings suggest that the EU is looking for more than case-by-case compliance; it wants to shape the long-term behavior of these digital gatekeepers. Additionally, Ribera is scheduled for talks in Washington, D.C., with the acting head of the U.S. Justice Department’s antitrust division. This cross-Atlantic dialogue is crucial, as the DOJ is pursuing its own landmark antitrust case against Google in the United States, focused on its search and ad tech dominance. Coordination between EU and US regulators could create a unified front that makes it much harder for Google to maintain its current business practices.

Potential Impact on the Digital Advertising Ecosystem

For advertisers, a ruling against Google under the DMA could be transformative. The EU is looking closely at how Google’s ad tech stack operates and


How AI-generated content performs in Google Search: A 16-month experiment

The rise of generative AI has fundamentally changed the landscape of content marketing and search engine optimization (SEO). Today, a single person can generate hundreds of high-quality-looking articles in a matter of hours, a task that once took a team of writers months to accomplish. However, the ease of production has led to a critical question for digital publishers: does this content actually provide long-term value in the eyes of Google?

To answer this, a comprehensive 16-month experiment was conducted in collaboration with the research team at SE Ranking. The goal was to move beyond anecdotal evidence and track the performance of raw, unedited AI content on brand-new domains with zero existing authority. The findings suggest that while AI can provide a quick burst of visibility, the road to long-term search success is far more complex than simply hitting “generate.”

The Methodology: Setting the Stage for the Experiment

The core objective of this study was to observe the natural lifecycle of AI-generated content without the interference of human optimization. Many SEO experts argue that AI content only works when heavily edited or paired with a strong backlink profile. This experiment stripped away those variables to see how the content performed on its own merit.

The team purchased 20 brand-new domains, ensuring there was no previous search history, brand recognition, or existing backlink profile that could skew the results. Each domain was dedicated to a specific niche to provide a broad look at how Google handles different topics.
The niches included:

- Arts & Entertainment
- Business & Services
- Community & Society
- Computers & Technology
- Ecommerce & Shopping
- Finance & Accounting
- Food & Drink
- Games & Accessories
- Health & Medicine
- Industry & Engineering
- Hobbies & Interests
- Home & Garden
- Jobs & Career
- Law & Government
- Lifestyle & Well-being
- Pets & Animals
- Science & Education
- Sports & Fitness
- Travel & Tourism
- Vehicles & Boats

For each of these 20 niches, the researchers identified 100 informational “how-to” keywords. These were specifically chosen as long-tail terms with lower competition, which typically offer the easiest path to ranking for new websites. In total, 2,000 AI-generated articles were published across the network of sites. No human editing, rewriting, or enhancement was performed. Once published, the sites were added to Google Search Console, sitemaps were submitted, and the pages were left untouched to observe their organic performance over 16 months.

Early Success: The Indexing and Visibility Phase

The initial results were surprisingly positive, leading some to believe that “AI spam” might actually be a viable strategy. Within the first 36 days, Google showed a high willingness to crawl and index the new content. Approximately 71% of the 2,000 pages (1,419 articles) were indexed within just over a month. For brand-new domains with zero authority, this is a remarkably high success rate.

During this first month, the collective network of sites generated 122,102 impressions and 244 clicks. More impressively, 80% of the sites were already ranking for at least 100 keywords. Some niches saw explosive early interest: the “Hobbies & Interests” domain led the pack with over 17,000 impressions, followed closely by “Business & Services” and “Travel & Tourism.” This early performance indicates that Google’s initial assessment of content is often based on relevance and basic SEO structure.
Because the AI-generated content followed a logical “how-to” format and targeted low-competition keywords, Google initially gave it a chance to compete in the search engine results pages (SERPs).

Months 2–3: The Growth Peak

As the experiment moved into its second and third months, the momentum continued to build. Cumulative impressions across the 20 sites rose from 122,102 to 526,624, and clicks increased from 244 to 782. By the ten-week mark, 12 of the 20 sites were ranking for more than 1,000 keywords each.

This phase is what lures many digital publishers into a false sense of security. The strategy appears to be working: the content is indexed, rankings are climbing, and traffic is trickling in. During this window, Google is essentially “testing” the content, placing the pages in front of users to see how they interact with the information. However, this growth proved to be the peak rather than the beginning of a steady climb.

The Great Ranking Collapse: Months 3–6

The turning point for the experiment arrived around early February 2025, approximately three months after the initial publication. The visibility that had been building steadily began to evaporate. By the six-month mark, the results were staggering: only 3% of the pages remained in the top 100 search results, down from 28% in the first month.

While cumulative impressions passed 700,000 by month six, a closer look at the data revealed a troubling trend: roughly 75% of all impressions and clicks were generated in the first 2.5 months. The subsequent 3.5 months saw a sharp decline in growth, with the sites adding very little to their totals. Google had effectively decided that the vast majority of this content did not deserve a place on the first few pages of search results.
The pages remained indexed, meaning Google still knew they existed, but they were essentially “buried.” Without the authority of backlinks or the unique value of human expertise, the AI-generated content could not maintain its position against more established or higher-quality competitors.

Long-Term Stagnation and the Impact of Spam Updates

The experiment was allowed to run for a total of 16 months to see if the sites would eventually recover or if Google’s algorithms would re-evaluate the content. For over a year, visibility remained extremely low across almost all niches. There was no “bounce back” for the majority of the AI articles. However, an interesting fluctuation occurred during the rollout of the Google August 2025 spam update: 50% of the sites saw a brief two-week spike in impressions. Following the completion of the update, the percentage of pages ranking in the top 100 rose to 20%—a


Google Ads API to block duplicate Lookalike user lists

Understanding the Shift in Google Ads API Data Management

Google has announced a significant technical update to the Google Ads API that will fundamentally change how advertisers and developers manage Lookalike user lists. Starting April 30, 2026, the Google Ads API will begin enforcing a uniqueness check on Lookalike user lists. This change means that the system will actively block the creation of duplicate lists that share identical configurations, including seed lists, expansion levels, and country targeting. While this might appear to be a minor housekeeping update, it carries substantial implications for the ecosystem of automated advertising. For years, digital marketers and developers have often created redundant lists for different campaigns or experimental setups. Moving forward, Google is shifting toward a more streamlined, signal-based architecture where efficiency and data hygiene are prioritized over volume. If you rely on programmatic campaign management, understanding this shift is critical to preventing technical debt and campaign downtime.

What Are Lookalike User Lists in the Modern Google Ecosystem?

To understand why this API change matters, we must first look at the role of Lookalike user lists in the current advertising landscape. These lists are a cornerstone of Google’s Demand Gen campaigns, which were designed to help advertisers find new customers who share similar characteristics with their existing high-value users. Lookalike segments work by taking a “seed list”—usually a Customer Match list, a list of website visitors, or app users—and using Google’s machine learning algorithms to identify other users with similar browsing habits, interests, and demographics. Advertisers typically define these segments using three key parameters:

The Seed List

The foundation of any Lookalike audience is the seed list. This is the first-party data provided by the advertiser.
The quality of the Lookalike audience is directly proportional to the quality of the seed list. If the seed list contains your top 10% of customers by lifetime value, the Lookalike model will be far more effective than if the seed list is simply a broad collection of all site visitors.

Expansion Levels

Google allows advertisers to choose how closely the new audience should match the seed list. These are typically categorized as Narrow (reaching the top 2.5% of similar users), Balanced (the top 5%), and Broad (the top 10%). Different expansion levels allow for a trade-off between reach and precision.

Geographic Targeting

Lookalike audiences are also defined by the country or region they target. Because user behavior and demographics vary significantly across borders, a Lookalike audience based on a US seed list might behave differently when applied to a European or Asian market. Under the new API rules, if a developer attempts to create a new Lookalike list that matches an existing one across all three of these parameters, the request will be rejected. This is Google’s way of ensuring that the Ads API is not cluttered with redundant data that serves no unique purpose for the machine learning models.

Technical Details: The April 30 Deadline and Error Handling

The enforcement of this policy is set for April 30, 2026. This date is firm, and developers should not expect a grace period once the rollout begins. The impact will be felt primarily by those using v24 of the Google Ads API and above, though legacy versions will also see changes in how errors are reported.

New Error Codes to Watch For

When the uniqueness check is triggered, the API will no longer simply create a second version of the list. Instead, it will return a specific error code. Developers must update their application logic to handle these errors gracefully to avoid breaking automated workflows.

v24 and Higher: The API will return the DUPLICATE_LOOKALIKE error code.
This is a specific indicator that the configuration (seed, expansion, and country) already exists in the account.

Earlier Versions: For those still operating on older versions of the API, the system will likely return a RESOURCE_ALREADY_EXISTS error.

The danger for many agencies and in-house marketing teams lies in “silent failures.” If a script is designed to create a new audience list for every new campaign launch and doesn’t have robust error handling, the script might crash, leaving the campaign without an audience or preventing the campaign from launching entirely. Moving toward “Get or Create” logic—where the script checks for an existing list before attempting to create a new one—will become the industry standard.

Why Google is Enforcing Uniqueness Checks

From a strategic perspective, Google’s decision to block duplicate Lookalike lists is part of a broader trend in the advertising industry: the shift toward signal-based marketing and system efficiency. There are several reasons why Google is making this change now.

Reducing Data Redundancy

Every user list created in Google Ads requires computational resources to process and maintain. When an account has hundreds of identical Lookalike lists, it creates a massive amount of redundant data that Google’s servers must track. By enforcing uniqueness, Google reduces the technical overhead required to manage audience segments, leading to a faster and more stable API environment.

Optimizing Machine Learning Signals

In the modern era of Google Ads, “everything is a signal.” Automation works best when it has clear, distinct data points to analyze. When an advertiser uses ten identical Lookalike lists across ten different campaigns, it can actually dilute the effectiveness of the bidding algorithms. By forcing the reuse of a single, unified list, the system can better aggregate performance data and optimize the audience model more effectively.
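The “Get or Create” pattern described above can be sketched in a few lines. Everything in this example is illustrative: the client class and exception are stand-ins, not the real Google Ads API client library. A production integration would follow the same shape, though — key each list by its seed/expansion/country configuration, look up before creating, and treat a DUPLICATE_LOOKALIKE or RESOURCE_ALREADY_EXISTS failure as a signal to reuse the existing list rather than crash the workflow.

```python
class DuplicateLookalikeError(Exception):
    """Stand-in for the API's DUPLICATE_LOOKALIKE / RESOURCE_ALREADY_EXISTS errors."""

class FakeAdsClient:
    """Hypothetical client that mimics the post-April-2026 uniqueness check."""

    def __init__(self):
        self._lists = {}  # config tuple -> user list resource name

    def find_list(self, config):
        """Return the resource name of a list with this exact configuration, if any."""
        return self._lists.get(config)

    def create_list(self, config):
        """Create a list; reject duplicates the way the enforced API will."""
        if config in self._lists:
            raise DuplicateLookalikeError(config)
        name = f"customers/123/userLists/{len(self._lists) + 1}"
        self._lists[config] = name
        return name

def get_or_create_lookalike(client, config):
    # 1. Check for an existing list with the same seed/expansion/country first.
    existing = client.find_list(config)
    if existing:
        return existing
    # 2. Otherwise create it, falling back to lookup if a concurrent
    #    process created the same configuration in the meantime.
    try:
        return client.create_list(config)
    except DuplicateLookalikeError:
        return client.find_list(config)

client = FakeAdsClient()
cfg = (("seed_list", 111), "BALANCED", "US")  # the three uniqueness parameters
first = get_or_create_lookalike(client, cfg)
second = get_or_create_lookalike(client, cfg)  # reuses the list, never crashes
print(first == second)  # True
```

The key design choice is that duplicate detection becomes a reuse path instead of a failure path, which is exactly the behavior automated campaign-launch scripts will need once the uniqueness check is enforced.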
Improving Account Hygiene

Large-scale advertisers often struggle with “account bloat.” Over time, accounts can become cluttered with thousands of legacy audiences, many of which are duplicates. This makes it difficult for human managers to audit accounts and for third-party tools to sync data. This change forces a level of discipline on advertisers, ensuring that the audience tab remains clean and manageable.

Strategic Impact on Demand Gen Campaigns

Demand Gen campaigns are specifically mentioned in the context of this update because they are the primary vehicle for Lookalike audiences. Demand Gen was introduced as a successor to Discovery ads, focusing on


Search Referral Traffic Down 60% For Small Publishers, Data Shows via @sejournal, @MattGSouthern

Understanding the Crisis: The Massive Shift in Search Referral Traffic

The digital publishing landscape is currently navigating one of its most turbulent eras to date. For over a decade, small to mid-sized publishers relied on a relatively predictable influx of traffic from search engines—primarily Google. However, recent data highlights a grim reality for independent creators. According to a report by Axios, citing data from the analytics firm Chartbeat, search referral traffic for small publishers has plummeted by a staggering 60% over the last two years. In stark contrast, large-scale publishers have managed to weather the storm with significantly more resilience, experiencing only a 22% decline in the same period. This disparity points toward a fundamental shift in how search engines prioritize content and how the “open web” is being restructured by algorithmic preferences. For many small business owners, niche bloggers, and independent news outlets, these figures represent more than just a dip in metrics; they represent an existential threat to their business models. To understand why this is happening and what it means for the future of the internet, we must look at the intersection of algorithmic updates, the rise of artificial intelligence, and the changing priorities of major tech platforms.

The Great Divide: Why Small Publishers are Losing Ground

The data from Chartbeat suggests a widening chasm between the “haves” and the “have-nots” in the digital space. When search referral traffic drops by 60%, the impact on revenue—specifically through display advertising and affiliate marketing—is catastrophic. But why are smaller entities being hit three times harder than their larger counterparts? One of the primary drivers is the evolution of Google’s ranking systems. Over the past 24 months, we have seen a series of aggressive updates, including the Helpful Content Update (HCU) and multiple Core Updates.
While Google maintains that these changes are designed to reward high-quality, original content, the practical result has often been a consolidation of visibility toward “authority” brands. Large publishers often possess “domain authority” that has been built over decades. They have massive backlink profiles, established brand recognition, and the resources to pivot quickly when guidelines change. Small publishers, regardless of the quality of their reporting or the depth of their expertise, often struggle to compete with the sheer technical and historical weight of a legacy media site. In the eyes of an algorithm designed to mitigate risk, a household name is often seen as a “safer” result than a specialized independent site.

The Impact of the Helpful Content Update (HCU)

A significant portion of the traffic decline can be traced back to the volatility introduced by the Helpful Content Update. Initially launched to target “SEO-first” content—articles written primarily to rank rather than to inform—the update inadvertently caught many legitimate small publishers in its net. Small publishers often focus on specific niches, providing deep-dive analysis that larger outlets might overlook. However, as the algorithm shifted toward prioritizing “Experience, Expertise, Authoritativeness, and Trustworthiness” (E-E-A-T), the “Authoritativeness” pillar became a significant hurdle. For Google’s automated systems, authority is often measured by the breadth of a site’s influence and its mentions across the wider web. Independent publishers, who may lack a massive PR department to secure high-tier backlinks, found themselves sidelined in favor of “big box” media outlets that cover everything from politics to product reviews.

The AI Revolution and Zero-Click Searches

Beyond traditional algorithmic shifts, the rise of Generative AI has fundamentally altered the search engine results page (SERP).
With the introduction of AI Overviews (formerly SGE), Google is now capable of answering user queries directly on the search page. This creates a “zero-click” environment where the user gets the information they need without ever visiting the source website. For small publishers who provide factual data, quick tips, or straightforward news, this is a devastating development. If a user asks for a specific “how-to” guide or a summary of a local event, and Google’s AI provides that summary using the small publisher’s data, the publisher loses the visit, the ad impression, and the potential for a newsletter sign-up. While large publishers also face this threat, their diversified revenue streams and direct-to-site traffic help cushion the blow. Small publishers, who often live and die by search referrals, do not have that luxury.

The Collapse of Social Referrals

The 60% drop in search traffic does not exist in a vacuum. It is occurring at the same time that social media platforms are retreating from the news business. For years, Facebook and X (formerly Twitter) served as secondary traffic drivers for small publishers. However, Meta has actively de-prioritized news content in the Facebook feed to avoid regulatory headaches and focus on short-form video. With social referral traffic also in a freefall, small publishers are being squeezed from both sides. When search traffic fails, there is no longer a reliable social safety net to catch the overflow. This has forced many independent outlets to reconsider their entire distribution strategy, moving away from “platform-dependent” growth toward more sustainable, direct-to-consumer models.

The Visibility Paradox: Big Brands vs. Niche Experts

The Chartbeat data highlights a paradox in modern SEO. Google’s documentation often encourages creators to “find their niche” and provide “unique perspectives.” Yet, the data shows that when the algorithm is applied at scale, it is the generalist, high-authority brands that are winning.
This “brand bias” has led to a situation where a major news outlet writing a 500-word summary of a topic can outrank a niche expert who wrote a 3,000-word definitive guide on the same subject. For the small publisher, this feels like a betrayal of the “meritocratic” web that Google once promised. The 22% drop for large publishers is certainly not negligible, but it represents a manageable correction compared to the 60% “extinction-level” event facing smaller players.

How Small Publishers Can Fight Back

Despite the bleak outlook provided by the data, small publishers are not entirely without recourse. Surviving a 60% traffic drop requires a radical shift in how content is produced and distributed. Here are several strategies being employed by resilient


ChatGPT ads pilot leaves advertisers without proof of ROI

The Dawn of AI Advertising and the Measurement Gap

For nearly two years, the digital marketing world has buzzed with anticipation and apprehension regarding how OpenAI would eventually monetize its flagship product, ChatGPT. As the platform surged to hundreds of millions of active users, the transition from a subscription-only model to an ad-supported ecosystem seemed inevitable. However, the initial rollout of the ChatGPT ads pilot has been met with a surprising realization: one of the most advanced technology companies in human history is currently offering an advertising product that feels like a relic from a different era. Recent reports indicate that while OpenAI is aggressively moving forward with its advertising ambitions, early adopters are finding themselves in a difficult position. The primary grievance among brand managers and agency executives is a fundamental lack of proof regarding Return on Investment (ROI). In an age where digital marketing is defined by granular data, real-time attribution, and algorithmic optimization, the ChatGPT ads pilot currently operates within a “black box” that leaves advertisers guessing whether their spend is actually driving business growth.

The Reality of the ChatGPT Ads Pilot

According to reports from The Information and insights shared by SEO consultant Glenn Gabe, the initial pilot program for ChatGPT ads is remarkably primitive. Advertisers entering this space are not meeting a sophisticated ad manager interface like those provided by Google or Meta. Instead, they are encountering a manual, labor-intensive process that lacks the basic infrastructure required for modern performance marketing. Currently, the “big picture” for ChatGPT’s ad product is one of limited visibility. The platform shares almost no actionable data with its partners. There are no automated buying tools, meaning that transactions aren’t happening through a programmatic bidding system.
Instead, deals are being brokered through a series of phone calls, email chains, and shared spreadsheets. This manual approach is a far cry from the instantaneous, data-driven auctions that define the rest of the digital advertising landscape.

Challenges Facing Early Adopters

For the agencies and brands that have participated in the pilot, the experience has been a lesson in frustration. Several key obstacles have emerged that make it nearly impossible to justify long-term spending on the platform at this stage:

Lack of Automated Infrastructure: Without a self-service dashboard or automated API for ad placement, the process of launching and managing campaigns is inefficient. This prevents brands from scaling their efforts or making real-time adjustments based on performance.

Missing Performance Data: Advertisers thrive on metrics. They need to know click-through rates (CTR), conversion rates, cost-per-acquisition (CPA), and customer journey mapping. Reports suggest that OpenAI provides minimal data, making it impossible to evaluate outcomes with any degree of certainty.

Inability to Prove Results: Two agency executives speaking to The Information noted that they were unable to provide their clients with definitive proof that ChatGPT ads drove any measurable business results. Without this proof, the “experimental” budget quickly dries up.

The Irony of Advanced AI and Spreadsheet-Era Reporting

There is a profound irony in the current state of OpenAI’s advertising business. OpenAI has pioneered the most sophisticated Large Language Models (LLMs) in the world, capable of writing code, composing poetry, and solving complex reasoning problems in seconds. Yet, when it comes to the business side of their platform—specifically the reporting and analytics for their ad partners—they appear to be stuck in the “spreadsheet era.” This disconnect highlights a common growing pain for technology-first companies.
Building a world-class consumer product is not the same as building a world-class advertising platform. Google and Meta spent decades refining their tracking pixels, attribution windows, and reporting dashboards. OpenAI is attempting to bridge that gap in a matter of months, and the cracks are beginning to show. For the time being, the sophisticated AI under the hood of ChatGPT is not being utilized to help advertisers understand their audience or the impact of their creative assets.

Scaling to Millions: The Expansion Plans

Despite these early teething problems, OpenAI is not slowing down. The company has informed advertisers of its intention to scale ads to all U.S. users on the free and low-cost ChatGPT tiers in the coming weeks. This represents a massive expansion of inventory. Millions of additional eyeballs will soon see sponsored content within their chat interfaces. OpenAI’s advice to advertisers to improve performance in the meantime is relatively simple: supply more variations of text and visual creative. The theory is that more variety will allow the system to better match content to user queries. However, without the data to show which variations are actually working, advertisers are essentially doubling down on a “spray and pray” strategy, hoping that something sticks without ever being able to confirm what it was.

The Risks of Scaling Without Measurement

Expanding an ad product before the measurement tools are ready is a risky move. While it allows OpenAI to start capturing revenue immediately, it risks alienating the very brands it needs to build a sustainable ecosystem. If a brand spends $100,000 on ChatGPT ads and cannot see a single conversion or meaningful engagement metric, they are unlikely to return for a second campaign. For the digital marketing community, this expansion signals a transition from a closed pilot to a broader “beta” phase. While the audience size is growing, the maturity of the product is not yet matching that scale.
Advertisers are being asked to pay for reach while being denied the tools to measure the value of that reach.

Why Digital Marketers Should Care

For SEO professionals, digital marketers, and brand stakeholders, the ChatGPT ads saga is a cautionary tale about the “shiny object” syndrome. The allure of being “first” on a platform as revolutionary as ChatGPT is strong, but it comes at a significant cost. If you are considering ChatGPT as a new ad channel, you must understand the current limitations.

Spending Blind

In the current state of the pilot, you are essentially spending blind. There is no reliable way to prove ROI to stakeholders. In an era where marketing budgets are under constant


Why zero-click search doesn’t mean zero influence

The digital marketing landscape is currently navigating one of the most significant structural shifts since the invention of the search engine. During a recent keynote at the Industrial Marketing Summit, SparkToro co-founder Rand Fishkin reignited a long-standing debate by arguing that we are now firmly operating in a “zero-click world.” On the surface, the data supports this: a massive percentage of Google searches now end without a single click to an external website. Between featured snippets, local map packs, and the rapid rollout of AI Overviews, the search engine results page (SERP) has transformed from a list of doorways into a destination in its own right. For many SEOs and digital publishers, this trend feels like an existential threat. If users are finding their answers directly on Google, Reddit, or through a ChatGPT prompt, the traditional value proposition of a website—as a driver of measurable traffic—seems to be evaporating. However, looking only at click-through rates (CTR) provides a narrow and increasingly inaccurate view of how digital influence actually works in the modern era. The deeper reality is that while clicks may be declining, the structural importance of high-quality, original content is actually increasing. To understand why zero-click search doesn’t mean zero influence, we have to look past the surface-level metrics and examine how information is evaluated, synthesized, and trusted across the modern web ecosystem. In this new environment, websites are no longer just destinations; they are the fundamental training data and authority signals that power the entire AI-driven information pipeline.

Why ‘zero-click’ discussions often lead to the wrong conclusion

From a purely analytical perspective, the zero-click trend is undeniable. Search engines have evolved to prioritize user convenience, which often means answering a query as quickly as possible.
If a user wants to know the “best time to plant tomatoes in Zone 7,” Google provides a direct answer. If they want to know a company’s stock price or the result of last night’s game, the data is presented instantly. The user is satisfied, but the publisher receives no visit. The rise of AI assistants and large language models (LLMs) has accelerated this. These tools synthesize answers from dozens of sources, presenting a cohesive narrative that removes the need for the user to visit individual links. This shift disrupts the traditional “traffic-first” model of SEO that has dominated the industry for over twenty years. When visibility no longer translates into a visit recorded in GA4, many marketers conclude that the website matters less. This is a fundamental miscalculation. The conclusion that websites are losing importance is an incomplete assessment of the information ecosystem. Large language models and AI-driven search interfaces do not create knowledge out of thin air; they rely on probabilistic signals drawn from the open web. They evaluate truth through consistency and authority. When a brand’s message appears consistently across multiple independent, high-quality sources, the statistical likelihood that the information is correct—and therefore worth repeating—increases. In this context, visibility is no longer just about the click; it is about being the “source of truth” that the AI chooses to relay.

The evolution of visibility signals

Historically, we used traffic to forecast performance. If we ranked for a keyword with 10,000 monthly searches and had a 10% CTR, we knew we’d get 1,000 visits. In a zero-click world, that math breaks. However, the influence remains. If 10,000 people see your brand name cited as the authority in an AI Overview, your brand has still gained 10,000 impressions of high-intent authority. This “invisible” visibility shapes consumer perception and feeds the top of the funnel in ways that traditional analytics struggle to capture.
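The forecast math above is simple enough to make explicit. The 10,000-search and 10% CTR figures come from the paragraph itself; the collapsed “zero-click” CTR below is a hypothetical number for illustration, not a measured benchmark.

```python
# Traditional traffic forecast: visits = search volume x click-through rate.
monthly_searches = 10_000
classic_ctr = 0.10

classic_visits = int(monthly_searches * classic_ctr)
print(classic_visits)  # 1000

# Zero-click world: the click-through rate collapses, but if your brand is the
# cited source, every one of those searches is still a high-intent impression.
zero_click_ctr = 0.02  # hypothetical post-AI-Overview CTR, for illustration only
visits = int(monthly_searches * zero_click_ctr)
impressions = monthly_searches  # visibility remains even without the click

print(visits, impressions)  # 200 10000
```

The point of the comparison is that the visit column shrinks while the impression column does not, which is exactly the measurement gap traditional analytics fails to capture.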
Fishkin is right about the trend

Rand Fishkin’s observation about the “fragmentation of discovery” accurately describes the modern user journey. We no longer live in a world where search begins and ends with a blue link. Information consumption is now distributed across a massive variety of environments:

AI Overviews: Search engines synthesize complex answers at the top of the page.

Social Discovery: Platforms like TikTok and LinkedIn have become research engines where users search for product reviews or professional advice.

Community Forums: Reddit and Discord act as bastions of human-first, experiential knowledge that AI often prioritizes.

Vertical Search: Amazon for products, YouTube for “how-to” content, and specialized industry databases.

When a user encounters a professional insight on LinkedIn or a product recommendation in a Reddit thread, they may never visit the original creator’s website. From a traditional analytics standpoint, this looks like a failure or a lost opportunity. But from a brand perspective, it is a successful touchpoint. The underlying knowledge that fueled that Reddit conversation or LinkedIn post had to originate somewhere. The environments where people consume information are expanding, but the demand for primary, authoritative data has never been higher.

Zero-click doesn’t mean zero influence

To succeed in the current landscape, marketers must understand the critical distinction between traffic and information influence. While traffic measures whether a user landed on your URL, influence measures whether your expertise shaped the answer the user received, regardless of where they saw it. AI systems are essentially advanced pattern-matching engines. When an LLM answers a question about a technical concept, a legal strategy, or a marketing tactic, it isn’t “thinking.” It is constructing a response based on patterns learned from the web.
It draws on the analysis, explanations, and original thought leadership that publishers have placed online. If your website is the primary source of a specific methodology or a unique set of data, the AI will use your “information fingerprint” to construct its answer. Even in a zero-click environment, those primary sources are the anchors of the ecosystem. Influence occurs earlier in the pipeline. If a user asks an AI, “What is the best way to scale a SaaS business?” and the AI uses your framework to answer, you have influenced that user’s strategy. They now associate your concepts with the solution to their problem. While you didn’t get the click today, you


Why ‘search everywhere’ is the new reality for SEO

For decades, the search engine optimization industry has been defined by a single, monolithic goal: ranking on the first page of Google. Marketers obsessed over the “ten blue links,” fine-tuning meta tags and backlink profiles to appease a single algorithm. However, the digital landscape has undergone a seismic shift. Today, the most pressing conversations in SEO circles revolve around Artificial Intelligence (AI)—specifically the rise of AI Overviews, ChatGPT, and large language models (LLMs). There is a palpable fear that these generative technologies are cannibalizing traffic, forcing brands to pivot toward Generative Engine Optimization (GEO) or Answer Engine Optimization (AEO). While the concern regarding AI-driven traffic loss is statistically valid—particularly for informational, top-of-funnel content—it masks a much larger and more fundamental change in human behavior. The real evolution isn’t just about how AI interprets data; it is about where users are going to find information in the first place. User behavior has fragmented across a dozen different ecosystems, from social media to retail giants. We have entered an era where “search everywhere” is no longer a luxury or a niche strategy; it is the new reality for digital survival.

The Fragmentation of the Modern Search Journey

The traditional search funnel used to be linear: a user had a problem, they went to Google, they clicked a link, and they found a solution. That journey has been shattered. Today, discovery happens in real-time, across platforms that were never originally intended to be search engines. When a user wants to find a new restaurant, they search TikTok to see the ambiance and the food in motion. When they need to fix a broken appliance, they head to YouTube for a visual tutorial.
When they want an unbiased review of a tech product, they append “Reddit” to their query or search the platform directly to avoid the polished marketing fluff of corporate websites. And when they are ready to buy, they often bypass search engines entirely, starting their journey on Amazon. This shift represents more than just a change in habit; it is reflected in hard traffic data. Recent research, including an analysis of 41 websites with significant search activity by SparkToro and Datos, highlights a startling trend. In Q4 of 2025, platforms like Amazon and YouTube continued to drive significantly more desktop traffic and search activity than ChatGPT. While LLMs are growing, they are not yet the primary disruptors of traditional search—fragmentation across specialized platforms is.

Rethinking the Competitive Landscape

One of the biggest mistakes a modern brand can make is assuming their only competitors are the companies selling the same products or services. In a “search everywhere” world, your competitors are often content creators, community hubs, and media platforms that occupy the digital real estate your audience frequents. In a recent share of voice analysis conducted for a major client, the objective was to identify who was winning in traditional search across multiple service lines and to map out a content roadmap to fill those gaps. The results were eye-opening. While the client expected to see their direct business rivals at the top of the list, the analysis revealed that their biggest competitors for visibility were actually YouTube and Reddit. These third-party platforms are not just “social sites”; they are search powerhouses that rank exceptionally well in traditional Search Engine Results Pages (SERPs). They take up valuable real estate, such as video carousels and “Discussions and Forums” modules.
When a user clicks a Reddit thread or a YouTube video from a Google result, they are funneled away from the traditional web and into a proprietary ecosystem. If your brand does not have a presence on these platforms, you are effectively invisible to a massive segment of your target market, regardless of how well your website’s blog is optimized.

The Power of In-Platform Search Volume

Understanding the “search everywhere” reality requires looking beyond Google’s keyword tools. Depending on the intent behind a query, there may be far more search volume occurring within a specific platform than on all traditional search engines combined. This is particularly true for “how-to” and educational content. Take, for example, the query “how to fix a leaky sink faucet.” Data from tools like Semrush and vidIQ suggest that this specific term can have up to 15 times more search volume on YouTube than on traditional search engines globally. For a homeowner standing in a puddle of water, a 1,500-word blog post is less helpful than a three-minute video showing exactly which wrench to use and which direction to turn it. The takeaway for SEOs is clear: if your content strategy is restricted to text-based articles, you are capping your potential reach. To be truly “search everywhere” friendly, a holistic approach is required. For a topic like home repair, the strategy should involve creating a high-quality YouTube video and then embedding that video within a comprehensive blog post. This allows you to capture traffic from YouTube’s internal search, Google’s video carousels, and traditional organic listings simultaneously.

The Influence of Social Platforms on AI Citations

The “search everywhere” phenomenon also dictates how AI models like ChatGPT, Claude, and Gemini perceive your brand. LLMs do not generate answers in a vacuum; they synthesize information from a vast web of data. Crucially, they do not just look at your own website to understand who you are or what you do.
In fact, they often prioritize third-party sources to establish a “consensus.” AI visibility tools provide a window into how these citations work. In multiple analyses of major brands, a consistent pattern emerges: a very small percentage of AI citations (often less than 10%) come from the brand’s own website or those of its direct competitors. Instead, nearly 90% of citations originate from:

Third-party news and online publications.

Social media platforms (LinkedIn, X, TikTok).

Forum platforms like Reddit and Quora.

Niche review sites and industry aggregators.

This creates a new challenge for SEOs: the “Consensus Layer.” If you want an AI to recommend your
