SEO Test Shows It’s Trivial To Rank Misinformation On Google

Understanding the Vulnerability of Modern Search Results In the rapidly evolving landscape of digital information, Google has long positioned itself as the ultimate arbiter of truth. Through its complex ranking algorithms and initiatives like E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness), the search giant aims to prioritize high-quality, factual content. However, a recent and startling SEO experiment has demonstrated that the system is far more fragile than many realized. The test confirms that ranking blatant misinformation is not only possible but, in many cases, trivial for those who understand the mechanics of search engine optimization. The implications of this experiment are profound, touching on everything from political discourse and public health to the future of AI-driven search. When a search engine becomes a megaphone for falsehoods, the entire ecosystem of digital trust begins to crumble. This deep dive explores how the experiment was conducted, why Google’s sophisticated algorithms were bypassed, and what this means for the future of the internet. The Mechanics of the Misinformation Experiment The core of the SEO test involved a relatively straightforward but clever methodology. Researchers and SEO experts, including those documented by Roger Montti, sought to determine if a completely fabricated “fact” could not only rank on the first page of Google but also be accepted by the algorithm as a definitive answer. By creating content around a non-existent event or a false historical detail, the testers eliminated the obstacle of competing with established, factual sources. In a typical search scenario, Google compares new information against a vast index of known facts. However, when a “new” fact is introduced—something that hasn’t been written about before—the algorithm lacks a baseline for verification. If the fabricated content is presented on a site with decent technical SEO, proper internal linking, and clear headings, Google’s crawlers often treat it as a “fresh” and “relevant” discovery rather than a potential lie. The experiment proved that the algorithm prioritizes structural signals—such as keyword placement, schema markup, and mobile responsiveness—over the literal truth of the text. Once the fake information was indexed, it didn’t just sit in the back pages of the search results; it climbed to the top, often appearing in featured snippets or as a primary answer for specific queries. Why Google’s Algorithms Fall for Fabricated Content It may seem surprising that a multi-billion dollar AI infrastructure can be fooled by a simple lie. To understand why this happens, we must look at how Google defines “quality.” Google does not have a “truth” sensor; instead, it uses a series of proxies to estimate the likelihood that a page is helpful. These proxies are where the system becomes vulnerable. The Problem with Freshness and Uniqueness Google’s “Query Deserves Freshness” (QDF) and its preference for unique content are two pillars of its ranking system. When an SEO professional creates a unique lie, they are providing the algorithm with something it hasn’t seen before. Since the algorithm is trained to value “original research” and “new insights,” it may inadvertently reward misinformation because there is no contradictory data to flag it as false. In the eyes of a bot, a unique lie can look more valuable than a repetitive truth. 
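As a point of reference, the kind of "structural signal" described above looks something like the following. This is a minimal, hypothetical sketch in Python that emits a schema.org Article JSON-LD block; the headline, author, and publisher values are placeholders, and nothing in such markup attests to whether the page's claims are actually true, which is precisely the gap the experiment exploited.

```python
import json

# Illustrative only: a minimal schema.org Article JSON-LD block of the kind
# crawlers read as a structural signal. The markup describes the page; it says
# nothing about the accuracy of the claims on it.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline for a fabricated 'fact'",        # hypothetical page
    "datePublished": "2024-01-15",                                  # hypothetical date
    "author": {"@type": "Person", "name": "Jane Doe"},              # hypothetical author
    "publisher": {"@type": "Organization", "name": "Example Site"}, # hypothetical site
}

# Emit the tag a CMS template would place in the page <head>.
print('<script type="application/ld+json">')
print(json.dumps(article_markup, indent=2))
print("</script>")
```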
The Semantic Trap Modern search is semantic, meaning it tries to understand the intent and relationships between words rather than just matching keywords. If a piece of misinformation is written in a professional, authoritative tone and uses “entities” (names, dates, and locations) that Google recognizes, the algorithm perceives a high level of topical relevance. The lie is effectively “hidden” inside a shell of high-quality SEO writing, making it indistinguishable from a well-researched article to an automated crawler. Reliance on Structural Authority Search engines place significant weight on the technical health of a website. If a fabricated story is published on a domain with a clean history, fast loading speeds, and a secure HTTPS connection, it gains an immediate advantage. The algorithm assumes that a site which follows technical best practices is more likely to provide reliable content. This experiment highlights a dangerous gap: technical proficiency is not a guarantee of editorial integrity. The Ripple Effect: How Misinformation Spreads Beyond Search Perhaps the most concerning discovery from the SEO test was how quickly the misinformation spread to other platforms. The internet is no longer a collection of isolated websites; it is a giant, interconnected feedback loop. Once Google validates a piece of misinformation by ranking it highly, it sets off a chain reaction that is incredibly difficult to stop. The Role of Scraper Sites and Aggregators The web is populated by thousands of automated “scraper” sites that monitor high-ranking search results to generate their own content. When the fake fact appeared at the top of Google, these bots automatically copied the information, reworded it, and published it on their own domains. Within hours, a single lie can be mirrored across dozens of websites, creating a false sense of consensus. When Google sees the same “fact” appearing on multiple sites, its confidence in the accuracy of that fact actually increases, further cementing the misinformation’s rank. The AI Training Loop This experiment has dire consequences for Large Language Models (LLMs) like ChatGPT, Claude, and Google’s own Gemini. These AI models are trained on data scraped from the open web. If misinformation is allowed to rank and proliferate on Google, it inevitably ends up in the training sets for future AI. This leads to “model collapse” or “hallucination amplification,” where AI systems confidently state falsehoods because they encountered them multiple times during their training phase. AI Overviews and Featured Snippets Google’s AI Overviews (formerly SGE) aim to summarize search results for users. However, these overviews are only as good as the sources they cite. The SEO test showed that Google’s AI summary tools are just as susceptible to misinformation as the standard organic results. If a fabricated article ranks #1, the AI summary will often use that article as its primary source, presenting the lie as a definitive, Google-sanctioned answer. Most users never click past the summary, meaning

EU signals imminent decision on Google DMA probe

EU signals imminent decision on Google DMA probe The regulatory landscape for global tech giants is shifting once again as the European Union prepares to deliver a potentially landmark ruling. After months of anticipation and mounting pressure from industry stakeholders, the EU’s top antitrust official has signaled that a decision regarding Google’s compliance with the Digital Markets Act (DMA) is imminent. While a specific date remains unconfirmed, the message from Brussels is clear: the period of observation is ending, and the era of enforcement is beginning. The Digital Markets Act was designed to curb the dominance of “gatekeeper” platforms and ensure a fair, competitive environment for smaller businesses and consumers. As Google maintains a commanding share of the search market in Europe—exceeding 90% in most member states—the outcome of this probe carries immense weight for the future of search engine optimization (SEO), digital advertising, and the burgeoning field of generative AI. The Stakes of the Impending Decision Teresa Ribera, the European Commission’s Competition Commissioner, recently addressed the status of the investigation. In comments made to Dow Jones Newswires, Ribera stated, “It will come,” referring to the final decision on the Google probe. She emphasized that the cases are inherently complex, requiring a meticulous review of evidence and a commitment to fair procedure. This careful approach, while legally necessary, has been a source of frustration for those who feel Google has been allowed to operate with an unfair advantage for too long. The investigation, which officially launched in March 2024, focuses on whether Google’s search results and app store practices unfairly favor its own services over those of competitors. This concept, known as “self-preferencing,” is a core violation under the DMA framework. If the Commission finds Google in breach of these regulations, the consequences could include massive fines—up to 10% of the company’s global annual turnover—and mandated structural changes to how Google displays information to hundreds of millions of European users. Why the Google Probe is Unique While the European Commission has already taken action against other tech titans like Meta and Apple under the DMA, the Google investigation has proven to be a more intricate puzzle. The search giant’s ecosystem is deeply integrated into the daily lives of both consumers and businesses, making any forced changes technically and economically significant. Meta has faced scrutiny over its “pay or consent” model, and Apple has been fined for its “steering” rules that prevented developers from informing users of cheaper alternatives outside the App Store. In contrast, Google’s probe touches upon the very architecture of the open web. The way Google ranks websites, displays shopping results, and now integrates AI-generated answers directly into search results (AI Overviews) is under the microscope. The Commission must balance the need for competition with the functional requirements of a high-quality search engine. Mounting Pressure from Advocacy Groups The delay in reaching a decision has not gone unnoticed. This month, a coalition of 18 lobby and civil society groups sent a formal letter to Commissioner Ribera, demanding swift and decisive action. The groups argue that the Commission’s credibility is at stake. 
They contend that every day the status quo remains, European businesses are being systematically disadvantaged by a search algorithm that they claim prioritizes Google’s own interests. The letter highlights a critical concern for the SEO community: if a gatekeeper can control the flow of traffic with impunity, the incentive for independent businesses to invest in high-quality web content diminishes. The advocates are calling for “clear remedies” that go beyond mere financial penalties. They want to see fundamental shifts in how Google presents search results, ensuring that vertical search services (such as travel, local business, and shopping engines) are given a fair chance to appear alongside Google’s own offerings. The AI Factor: AI Overviews and Content Rights Perhaps the most modern and controversial aspect of the EU’s scrutiny involves how Google utilizes data to power its AI Overviews. As Google moves toward an “answer engine” model rather than a “link engine” model, publishers are raising alarms about content theft and the loss of referral traffic. The European Commission is separately investigating how Google ranks news publishers and how it uses third-party content to train and display AI-generated summaries. Under the DMA, gatekeepers are prohibited from using the data of business users to compete against them. If Google’s AI Overviews are found to be scraping content from publishers to keep users on Google’s own pages—thereby depriving those publishers of ad revenue and visitor data—it could constitute a major violation of the DMA. For SEO professionals and content creators, this ruling could determine the viability of their business models. If the EU mandates that Google must provide more transparency or compensation for the use of publisher data in AI, it could set a global precedent for how the relationship between AI developers and content creators is governed. Ribera’s High-Stakes Meetings in the US The timing of Ribera’s announcement is no coincidence. The Competition Commissioner is currently on a high-profile tour of the United States, meeting with the leaders of the tech world’s most powerful companies. Her itinerary includes sessions with Alphabet’s Sundar Pichai, Meta’s Mark Zuckerberg, OpenAI’s Sam Altman, and Amazon’s Andy Jassy. These meetings suggest that the EU is looking for more than just compliance on a case-by-case basis; they are looking to shape the long-term behavior of these digital gatekeepers. Additionally, Ribera is scheduled for talks in Washington, D.C., with the acting head of the U.S. Justice Department’s antitrust division. This cross-Atlantic dialogue is crucial, as the DOJ is currently pursuing its own landmark antitrust case against Google in the United States, focusing on its search and ad tech dominance. Coordination between the EU and US regulators could create a unified front that makes it much harder for Google to maintain its current business practices. Potential Impact on the Digital Advertising Ecosystem For advertisers, a ruling against Google under the DMA could be transformative. The EU is looking closely at how Google’s ad tech stack operates and

How AI-generated content performs in Google Search: A 16-month experiment

How AI-generated content performs in Google Search: A 16-month experiment The rise of Generative AI has fundamentally changed the landscape of content marketing and Search Engine Optimization (SEO). Today, a single person can generate hundreds of high-quality-looking articles in a matter of hours, a task that once took a team of writers months to accomplish. However, the ease of production has led to a critical question for digital publishers: does this content actually provide long-term value in the eyes of Google? To answer this, a comprehensive 16-month experiment was conducted in collaboration with the research team at SE Ranking. The goal was to move beyond anecdotal evidence and track the performance of raw, unedited AI content on brand-new domains with zero existing authority. The findings suggest that while AI can provide a quick burst of visibility, the road to long-term search success is far more complex than simply hitting “generate.” The Methodology: Setting the Stage for the Experiment The core objective of this study was to observe the natural lifecycle of AI-generated content without the interference of human optimization. Many SEO experts argue that AI content only works when heavily edited or paired with a strong backlink profile. This experiment stripped away those variables to see how the content performed on its own merit. The team purchased 20 brand-new domains, ensuring there was no previous search history, brand recognition, or existing backlink profile that could skew the results. Each domain was dedicated to a specific niche to provide a broad look at how Google handles different topics. The niches included: Arts & Entertainment Business & Services Community & Society Computers & Technology Ecommerce & Shopping Finance & Accounting Food & Drink Games & Accessories Health & Medicine Industry & Engineering Hobbies & Interests Home & Garden Jobs & Career Law & Government Lifestyle & Well-being Pets & Animals Science & Education Sports & Fitness Travel & Tourism Vehicles & Boats For each of these 20 niches, the researchers identified 100 informational “how-to” keywords. These were specifically chosen as long-tail terms with lower competition, which typically offer the easiest path to ranking for new websites. In total, 2,000 AI-generated articles were published across the network of sites. No human editing, rewriting, or enhancement was performed. Once published, the sites were added to Google Search Console, sitemaps were submitted, and the pages were left untouched to observe their organic performance over 16 months. Early Success: The Indexing and Visibility Phase The initial results were surprisingly positive, leading some to believe that “AI spam” might actually be a viable strategy. Within the first 36 days, Google showed a high willingness to crawl and index the new content. Approximately 71% of the 2,000 pages (1,419 articles) were indexed within just over a month. For brand-new domains with zero authority, this is a remarkably high success rate. During this first month, the collective network of sites generated 122,102 impressions and 244 clicks. More impressively, 80% of the sites were already ranking for at least 100 keywords. Some niches saw explosive early interest. The “Hobbies & Interests” domain led the pack with over 17,000 impressions, followed closely by “Business & Services” and “Travel & Tourism.” This early performance indicates that Google’s initial assessment of content is often based on relevance and basic SEO structure. 
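For readers who want the month-one figures above restated as rates, here is a short arithmetic sketch using only the numbers reported by the study (2,000 articles, 1,419 indexed, 122,102 impressions, 244 clicks); the calculations are illustrative and add nothing beyond the published data.

```python
# Recompute the month-one figures reported above (values taken from the study).
total_articles = 2000
indexed = 1419
impressions = 122_102
clicks = 244

indexation_rate = indexed / total_articles       # ~0.71, i.e. ~71% indexed
ctr = clicks / impressions                       # ~0.002, i.e. roughly 0.2% CTR
impressions_per_page = impressions / indexed     # ~86 impressions per indexed page

print(f"Indexed: {indexation_rate:.0%}")
print(f"Network-wide CTR: {ctr:.2%}")
print(f"Avg impressions per indexed page: {impressions_per_page:.0f}")
```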
Because the AI-generated content followed a logical “how-to” format and targeted low-competition keywords, Google initially gave it a chance to compete in the Search Engine Results Pages (SERPs). Months 2–3: The Growth Peak As the experiment moved into its second and third months, the momentum continued to build. Cumulative impressions across the 20 sites rose from 122,102 to 526,624, and clicks increased from 244 to 782. By the ten-week mark, 12 of the 20 sites were ranking for more than 1,000 keywords each. This phase is often what lures many digital publishers into a false sense of security. It appears that the strategy is working: the content is indexed, rankings are climbing, and traffic is trickling in. During this window, Google is essentially “testing” the content. It places the pages in front of users to see how they interact with the information. However, this growth proved to be the peak rather than the beginning of a steady climb. The Great Ranking Collapse: Months 3–6 The turning point for the experiment arrived around early February 2025, approximately three months after the initial publication. The visibility that had been building steadily began to evaporate. By the six-month mark, the results were staggering: only 3% of the pages remained in the top 100 search results, down from 28% in the first month. While the total number of impressions across the 16-month period reached over 700,000 by month six, a closer look at the data revealed a troubling trend. Roughly 75% of all total impressions and clicks were generated in the first 2.5 months. The subsequent 3.5 months saw a sharp decline in growth, with the sites adding very little to their totals. Google had effectively decided that the vast majority of this content did not deserve a place on the first few pages of search results. The pages remained indexed, meaning Google still knew they existed, but they were essentially “buried.” Without the authority of backlinks or the unique value of human expertise, the AI-generated content could not maintain its position against more established or higher-quality competitors. Long-Term Stagnation and the Impact of Spam Updates The experiment was allowed to run for a total of 16 months to see if the sites would eventually recover or if Google’s algorithms would re-evaluate the content. For over a year, visibility remained extremely low across almost all niches. There was no “bounce back” for the majority of the AI articles. However, an interesting fluctuation occurred during the rollout of the Google August 2025 spam update. During this period, 50% of the sites saw a brief two-week spike in impressions. Following the completion of the update, the percentage of pages ranking in the top 100 rose to 20%—a

Google Ads API to block duplicate Lookalike user lists

Understanding the Shift in Google Ads API Data Management Google has announced a significant technical update to the Google Ads API that will fundamentally change how advertisers and developers manage Lookalike user lists. Starting April 30, 2026, the Google Ads API will begin enforcing a uniqueness check on Lookalike user lists. This change means that the system will actively block the creation of duplicate lists that share identical configurations, including seed lists, expansion levels, and country targeting. While this might appear to be a minor housekeeping update, it carries substantial implications for the ecosystem of automated advertising. For years, digital marketers and developers have often utilized redundant lists for different campaigns or experimental setups. Moving forward, Google is moving toward a more streamlined, signal-based architecture where efficiency and data hygiene are prioritized over volume. If you rely on programmatic campaign management, understanding this shift is critical to preventing technical debt and campaign downtime. What Are Lookalike User Lists in the Modern Google Ecosystem? To understand why this API change matters, we must first look at the role of Lookalike user lists in the current advertising landscape. These lists are a cornerstone of Google’s Demand Gen campaigns, which were designed to help advertisers find new customers who share similar characteristics with their existing high-value users. Lookalike segments work by taking a “seed list”—usually a Customer Match list, a list of website visitors, or app users—and using Google’s machine learning algorithms to identify other users with similar browsing habits, interests, and demographics. Advertisers typically define these segments using three key parameters: The Seed List The foundation of any Lookalike audience is the seed list. This is the first-party data provided by the advertiser. The quality of the Lookalike audience is directly proportional to the quality of the seed list. If the seed list contains your top 10% of customers by lifetime value, the Lookalike model will be far more effective than if the seed list is simply a broad collection of all site visitors. Expansion Levels Google allows advertisers to choose how closely the new audience should match the seed list. These are typically categorized as Narrow (reaching the top 2.5% of similar users), Balanced (the top 5%), and Broad (the top 10%). Different expansion levels allow for a trade-off between reach and precision. Geographic Targeting Lookalike audiences are also defined by the country or region they target. Because user behavior and demographics vary significantly across borders, a Lookalike audience based on a US seed list might behave differently when applied to a European or Asian market. Under the new API rules, if a developer attempts to create a new Lookalike list that matches an existing one across all three of these parameters, the request will be rejected. This is Google’s way of ensuring that the Ads API is not cluttered with redundant data that serves no unique purpose for the machine learning models. Technical Details: The April 30 Deadline and Error Handling The enforcement of this policy is set for April 30, 2026. This date is firm, and developers should not expect a grace period once the rollout begins. The impact will be felt primarily by those using v24 of the Google Ads API and above, though legacy versions will also see changes in how errors are reported. 
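To make the practical impact concrete, here is a minimal "get or create" sketch in Python. It is a hypothetical illustration, not the real Google Ads client library: the LookalikeConfig class, the in-memory list store, and the DuplicateListError exception are all stand-ins. The point is the control flow that scripts will need after the deadline: match on the three defining parameters first, and treat a rejection of an identical configuration as a cue to reuse the existing list rather than as a fatal failure.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class LookalikeConfig:
    """The three parameters that define uniqueness under the new policy."""
    seed_list_id: str
    expansion_level: str   # e.g. "NARROW", "BALANCED", "BROAD"
    country_code: str

class DuplicateListError(Exception):
    """Stand-in for whatever 'already exists' rejection the API returns."""

# Stand-in for the account's existing lists, keyed by their defining config.
_existing_lists: Dict[LookalikeConfig, str] = {}

def create_lookalike(config: LookalikeConfig) -> str:
    """Pretend API call: rejects configurations that already exist."""
    if config in _existing_lists:
        raise DuplicateListError(config)
    list_id = f"list_{len(_existing_lists) + 1}"
    _existing_lists[config] = list_id
    return list_id

def get_or_create_lookalike(config: LookalikeConfig) -> str:
    """Reuse a matching list when one exists; create it only when absent."""
    if config in _existing_lists:          # 1. look up first
        return _existing_lists[config]
    try:
        return create_lookalike(config)    # 2. create only if genuinely new
    except DuplicateListError:
        # 3. A duplicate rejection (e.g. a race with another script) is not
        #    fatal: fall back to the existing list instead of crashing the
        #    campaign-launch workflow.
        return _existing_lists[config]

config = LookalikeConfig("seed_123", "BALANCED", "US")
print(get_or_create_lookalike(config))  # creates a new list
print(get_or_create_lookalike(config))  # reuses it instead of duplicating
```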
New Error Codes to Watch For When the uniqueness check is triggered, the API will no longer simply create a second version of the list. Instead, it will return a specific error code. Developers must update their application logic to handle these errors gracefully to avoid breaking automated workflows. v24 and Higher: The API will return the DUPLICATE_LOOKALIKE error code. This is a specific indicator that the configuration (seed, expansion, and country) already exists in the account. Earlier Versions: For those still operating on older versions of the API, the system will likely return a RESOURCE_ALREADY_EXISTS error. The danger for many agencies and in-house marketing teams lies in “silent failures.” If a script is designed to create a new audience list for every new campaign launch and doesn’t have robust error handling, the script might crash, leaving the campaign without an audience or preventing the campaign from launching entirely. Moving toward “Get or Create” logic—where the script checks for an existing list before attempting to create a new one—will become the industry standard. Why Google is Enforcing Uniqueness Checks From a strategic perspective, Google’s decision to block duplicate Lookalike lists is part of a broader trend in the advertising industry: the shift toward signal-based marketing and system efficiency. There are several reasons why Google is making this change now. Reducing Data Redundancy Every user list created in Google Ads requires computational resources to process and maintain. When an account has hundreds of identical Lookalike lists, it creates a massive amount of redundant data that Google’s servers must track. By enforcing uniqueness, Google reduces the technical overhead required to manage audience segments, leading to a faster and more stable API environment. Optimizing Machine Learning Signals In the modern era of Google Ads, “everything is a signal.” Automation works best when it has clear, distinct data points to analyze. When an advertiser uses ten identical Lookalike lists across ten different campaigns, it can actually dilute the effectiveness of the bidding algorithms. By forcing the reuse of a single, unified list, the system can better aggregate performance data and optimize the audience model more effectively. Improving Account Hygiene Large-scale advertisers often struggle with “account bloat.” Over time, accounts can become cluttered with thousands of legacy audiences, many of which are duplicates. This makes it difficult for human managers to audit accounts and for third-party tools to sync data. This change forces a level of discipline on advertisers, ensuring that the audience tab remains clean and manageable. Strategic Impact on Demand Gen Campaigns Demand Gen campaigns are specifically mentioned in the context of this update because they are the primary vehicle for Lookalike audiences. Demand Gen was introduced as a successor to Discovery ads, focusing on

Search Referral Traffic Down 60% For Small Publishers, Data Shows

Understanding the Crisis: The Massive Shift in Search Referral Traffic The digital publishing landscape is currently navigating one of its most turbulent eras to date. For over a decade, small to mid-sized publishers relied on a relatively predictable influx of traffic from search engines—primarily Google. However, recent data highlights a grim reality for independent creators. According to a report by Axios, citing data from the analytics firm Chartbeat, search referral traffic for small publishers has plummeted by a staggering 60% over the last two years. In stark contrast, large-scale publishers have managed to weather the storm with significantly more resilience, experiencing only a 22% decline in the same period. This disparity points toward a fundamental shift in how search engines prioritize content and how the “open web” is being restructured by algorithmic preferences. For many small business owners, niche bloggers, and independent news outlets, these figures represent more than just a dip in metrics; they represent an existential threat to their business models. To understand why this is happening and what it means for the future of the internet, we must look at the intersection of algorithmic updates, the rise of artificial intelligence, and the changing priorities of major tech platforms. The Great Divide: Why Small Publishers are Losing Ground The data from Chartbeat suggests a widening chasm between the “haves” and the “have-nots” in the digital space. When search referral traffic drops by 60%, the impact on revenue—specifically through display advertising and affiliate marketing—is catastrophic. But why are smaller entities being hit three times harder than their larger counterparts? One of the primary drivers is the evolution of Google’s ranking systems. Over the past 24 months, we have seen a series of aggressive updates, including the Helpful Content Update (HCU) and multiple Core Updates. While Google maintains that these changes are designed to reward high-quality, original content, the practical result has often been a consolidation of visibility toward “authority” brands. Large publishers often possess “domain authority” that has been built over decades. They have massive backlink profiles, established brand recognition, and the resources to pivot quickly when guidelines change. Small publishers, regardless of the quality of their reporting or the depth of their expertise, often struggle to compete with the sheer technical and historical weight of a legacy media site. In the eyes of an algorithm designed to mitigate risk, a household name is often seen as a “safer” result than a specialized independent site. The Impact of the Helpful Content Update (HCU) A significant portion of the traffic decline can be traced back to the volatility introduced by the Helpful Content Update. Initially launched to target “SEO-first” content—articles written primarily to rank rather than to inform—the update inadvertently caught many legitimate small publishers in its net. Small publishers often focus on specific niches, providing deep-dive analysis that larger outlets might overlook. However, as the algorithm shifted toward prioritizing “Experience, Expertise, Authoritativeness, and Trustworthiness” (E-E-A-T), the “Authoritativeness” pillar became a significant hurdle. For Google’s automated systems, authority is often measured by the breadth of a site’s influence and its mentions across the wider web. 
Independent publishers, who may lack a massive PR department to secure high-tier backlinks, found themselves sidelined in favor of “big box” media outlets that cover everything from politics to product reviews. The AI Revolution and Zero-Click Searches Beyond traditional algorithmic shifts, the rise of Generative AI has fundamentally altered the search engine results page (SERP). With the introduction of AI Overviews (formerly SGE), Google is now capable of answering user queries directly on the search page. This creates a “zero-click” environment where the user gets the information they need without ever visiting the source website. For small publishers who provide factual data, quick tips, or straightforward news, this is a devastating development. If a user asks for a specific “how-to” guide or a summary of a local event, and Google’s AI provides that summary using the small publisher’s data, the publisher loses the visit, the ad impression, and the potential for a newsletter sign-up. While large publishers also face this threat, their diversified revenue streams and direct-to-site traffic help cushion the blow. Small publishers, who often live and die by search referrals, do not have that luxury. The Collapse of Social Referrals The 60% drop in search traffic does not exist in a vacuum. It is occurring at the same time that social media platforms are retreating from the news business. For years, Facebook and X (formerly Twitter) served as secondary traffic drivers for small publishers. However, Meta has actively de-prioritized news content in the Facebook feed to avoid regulatory headaches and focus on short-form video. With social referral traffic also in a freefall, small publishers are being squeezed from both sides. When search traffic fails, there is no longer a reliable social safety net to catch the overflow. This has forced many independent outlets to reconsider their entire distribution strategy, moving away from “platform-dependent” growth toward more sustainable, direct-to-consumer models. The Visibility Paradox: Big Brands vs. Niche Experts The Chartbeat data highlights a paradox in modern SEO. Google’s documentation often encourages creators to “find their niche” and provide “unique perspectives.” Yet, the data shows that when the algorithm is applied at scale, it is the generalist, high-authority brands that are winning. This “brand bias” has led to a situation where a major news outlet writing a 500-word summary of a topic can outrank a niche expert who wrote a 3,000-word definitive guide on the same subject. For the small publisher, this feels like a betrayal of the “meritocratic” web that Google once promised. The 22% drop for large publishers is certainly not negligible, but it represents a manageable correction compared to the 60% “extinction-level” event facing smaller players. How Small Publishers Can Fight Back Despite the bleak outlook provided by the data, small publishers are not entirely without recourse. Surviving a 60% traffic drop requires a radical shift in how content is produced and distributed. Here are several strategies being employed by resilient

ChatGPT ads pilot leaves advertisers without proof of ROI

The Dawn of AI Advertising and the Measurement Gap For nearly two years, the digital marketing world has buzzed with anticipation and apprehension regarding how OpenAI would eventually monetize its flagship product, ChatGPT. As the platform surged to hundreds of millions of active users, the transition from a subscription-only model to an ad-supported ecosystem seemed inevitable. However, the initial rollout of the ChatGPT ads pilot has been met with a surprising realization: one of the most advanced technology companies in human history is currently offering an advertising product that feels like a relic from a different era. Recent reports indicate that while OpenAI is aggressively moving forward with its advertising ambitions, early adopters are finding themselves in a difficult position. The primary grievance among brand managers and agency executives is a fundamental lack of proof regarding Return on Investment (ROI). In an age where digital marketing is defined by granular data, real-time attribution, and algorithmic optimization, the ChatGPT ads pilot currently operates within a “black box” that leaves advertisers guessing whether their spend is actually driving business growth. The Reality of the ChatGPT Ads Pilot According to reports from The Information and insights shared by SEO consultant Glenn Gabe, the initial pilot program for ChatGPT ads is remarkably primitive. Advertisers entering this space are not meeting a sophisticated ad manager interface like those provided by Google or Meta. Instead, they are encountering a manual, labor-intensive process that lacks the basic infrastructure required for modern performance marketing. Currently, the “big picture” for ChatGPT’s ad product is one of limited visibility. The platform shares almost no actionable data with its partners. There are no automated buying tools, meaning that transactions aren’t happening through a programmatic bidding system. Instead, deals are being brokered through a series of phone calls, email chains, and shared spreadsheets. This manual approach is a far cry from the instantaneous, data-driven auctions that define the rest of the digital advertising landscape. Challenges Facing Early Adopters For the agencies and brands that have participated in the pilot, the experience has been a lesson in frustration. Several key obstacles have emerged that make it nearly impossible to justify long-term spending on the platform at this stage: Lack of Automated Infrastructure: Without a self-service dashboard or automated API for ad placement, the process of launching and managing campaigns is inefficient. This prevents brands from scaling their efforts or making real-time adjustments based on performance. Missing Performance Data: Advertisers thrive on metrics. They need to know click-through rates (CTR), conversion rates, cost-per-acquisition (CPA), and customer journey mapping. Reports suggest that OpenAI provides minimal data, making it impossible to evaluate outcomes with any degree of certainty. Inability to Prove Results: Two agency executives speaking to The Information noted that they were unable to provide their clients with definitive proof that ChatGPT ads drove any measurable business results. Without this proof, the “experimental” budget quickly dries up. The Irony of Advanced AI and Spreadsheet-Era Reporting There is a profound irony in the current state of OpenAI’s advertising business. 
OpenAI has pioneered the most sophisticated Large Language Models (LLMs) in the world, capable of writing code, composing poetry, and solving complex reasoning problems in seconds. Yet, when it comes to the business side of their platform—specifically the reporting and analytics for their ad partners—they appear to be stuck in the “spreadsheet era.” This disconnect highlights a common growing pain for technology-first companies. Building a world-class consumer product is not the same as building a world-class advertising platform. Google and Meta spent decades refining their tracking pixels, attribution windows, and reporting dashboards. OpenAI is attempting to bridge that gap in a matter of months, and the cracks are beginning to show. For the time being, the sophisticated AI under the hood of ChatGPT is not being utilized to help advertisers understand their audience or the impact of their creative assets. Scaling to Millions: The Expansion Plans Despite these early teething problems, OpenAI is not slowing down. The company has informed advertisers of its intention to scale ads to all U.S. users on the free and low-cost ChatGPT tiers in the coming weeks. This represents a massive expansion of inventory. Millions of additional eyeballs will soon see sponsored content within their chat interfaces. OpenAI’s advice to advertisers to improve performance in the meantime is relatively simple: supply more variations of text and visual creative. The theory is that more variety will allow the system to better match content to user queries. However, without the data to show which variations are actually working, advertisers are essentially doubling down on a “spray and pray” strategy, hoping that something sticks without ever being able to confirm what it was. The Risks of Scaling Without Measurement Expanding an ad product before the measurement tools are ready is a risky move. While it allows OpenAI to start capturing revenue immediately, it risks alienating the very brands it needs to build a sustainable ecosystem. If a brand spends $100,000 on ChatGPT ads and cannot see a single conversion or meaningful engagement metric, they are unlikely to return for a second campaign. For the digital marketing community, this expansion signals a transition from a closed pilot to a broader “beta” phase. While the audience size is growing, the maturity of the product is not yet matching that scale. Advertisers are being asked to pay for reach while being denied the tools to measure the value of that reach. Why Digital Marketers Should Care For SEO professionals, digital marketers, and brand stakeholders, the ChatGPT ads saga is a cautionary tale about the “shiny object” syndrome. The allure of being “first” on a platform as revolutionary as ChatGPT is strong, but it comes at a significant cost. If you are considering ChatGPT as a new ad channel, you must understand the current limitations. Spending Blind In the current state of the pilot, you are essentially spending blind. There is no reliable way to prove ROI to stakeholders. In an era where marketing budgets are under constant

Why zero-click search doesn’t mean zero influence

Why zero-click search doesn’t mean zero influence The digital marketing landscape is currently navigating one of the most significant structural shifts since the invention of the search engine. During a recent keynote at the Industrial Marketing Summit, SparkToro co-founder Rand Fishkin reignited a long-standing debate by arguing that we are now firmly operating in a “zero-click world.” On the surface, the data supports this: a massive percentage of Google searches now end without a single click to an external website. Between featured snippets, local map packs, and the rapid rollout of AI Overviews, the search engine results page (SERP) has transformed from a list of doorways into a destination in its own right. For many SEOs and digital publishers, this trend feels like an existential threat. If users are finding their answers directly on Google, Reddit, or through a ChatGPT prompt, the traditional value proposition of a website—as a driver of measurable traffic—seems to be evaporating. However, looking only at click-through rates (CTR) provides a narrow and increasingly inaccurate view of how digital influence actually works in the modern era. The deeper reality is that while clicks may be declining, the structural importance of high-quality, original content is actually increasing. To understand why zero-click search doesn’t mean zero influence, we have to look past the surface-level metrics and examine how information is evaluated, synthesized, and trusted across the modern web ecosystem. In this new environment, websites are no longer just destinations; they are the fundamental training data and authority signals that power the entire AI-driven information pipeline. Why ‘zero-click’ discussions often lead to the wrong conclusion From a purely analytical perspective, the zero-click trend is undeniable. Search engines have evolved to prioritize user convenience, which often means answering a query as quickly as possible. If a user wants to know the “best time to plant tomatoes in Zone 7,” Google provides a direct answer. If they want to know a company’s stock price or the result of last night’s game, the data is presented instantly. The user is satisfied, but the publisher receives no visit. The rise of AI assistants and large language models (LLMs) has accelerated this. These tools synthesize answers from dozens of sources, presenting a cohesive narrative that removes the need for the user to visit individual links. This shift disrupts the traditional “traffic-first” model of SEO that has dominated the industry for over twenty years. When visibility no longer translates into a visit recorded in GA4, many marketers conclude that the website matters less. This is a fundamental miscalculation. The conclusion that websites are losing importance is an incomplete assessment of the information ecosystem. Large language models and AI-driven search interfaces do not create knowledge out of thin air; they rely on probabilistic signals drawn from the open web. They evaluate truth through consistency and authority. When a brand’s message appears consistently across multiple independent, high-quality sources, the statistical likelihood that the information is correct—and therefore worth repeating—increases. In this context, visibility is no longer just about the click; it is about being the “source of truth” that the AI chooses to relay. The evolution of visibility signals Historically, we used traffic to forecast performance. 
If we ranked for a keyword with 10,000 monthly searches and had a 10% CTR, we knew we’d get 1,000 visits. In a zero-click world, that math breaks. However, the influence remains. If 10,000 people see your brand name cited as the authority in an AI Overview, your brand has still gained 10,000 impressions of high-intent authority. This “invisible” visibility shapes consumer perception and feeds the top of the funnel in ways that traditional analytics struggle to capture. Fishkin is right about the trend Rand Fishkin’s observation about the “fragmentation of discovery” accurately describes the modern user journey. We no longer live in a world where search begins and ends with a blue link. Information consumption is now distributed across a massive variety of environments: AI Overviews: Search engines synthesize complex answers at the top of the page. Social Discovery: Platforms like TikTok and LinkedIn have become research engines where users search for product reviews or professional advice. Community Forums: Reddit and Discord act as bastions of human-first, experiential knowledge that AI often prioritizes. Vertical Search: Amazon for products, YouTube for “how-to” content, and specialized industry databases. When a user encounters a professional insight on LinkedIn or a product recommendation in a Reddit thread, they may never visit the original creator’s website. From a traditional analytics standpoint, this looks like a failure or a lost opportunity. But from a brand perspective, it is a successful touchpoint. The underlying knowledge that fueled that Reddit conversation or LinkedIn post had to originate somewhere. The environments where people consume information are expanding, but the demand for primary, authoritative data has never been higher. Zero-click doesn’t mean zero influence To succeed in the current landscape, marketers must understand the critical distinction between traffic and information influence. While traffic measures whether a user landed on your URL, influence measures whether your expertise shaped the answer the user received, regardless of where they saw it. AI systems are essentially advanced pattern-matching engines. When an LLM answers a question about a technical concept, a legal strategy, or a marketing tactic, it isn’t “thinking.” It is constructing a response based on patterns learned from the web. It draws on the analysis, explanations, and original thought leadership that publishers have placed online. If your website is the primary source of a specific methodology or a unique set of data, the AI will use your “information fingerprint” to construct its answer. Even in a zero-click environment, those primary sources are the anchors of the ecosystem. Influence occurs earlier in the pipeline. If a user asks an AI, “What is the best way to scale a SaaS business?” and the AI uses your framework to answer, you have influenced that user’s strategy. They now associate your concepts with the solution to their problem. While you didn’t get the click today, you

Why ‘search everywhere’ is the new reality for SEO

Why ‘search everywhere’ is the new reality for SEO For decades, the search engine optimization industry has been defined by a single, monolithic goal: ranking on the first page of Google. Marketers obsessed over the “ten blue links,” fine-tuning meta tags and backlink profiles to appease a single algorithm. However, the digital landscape has undergone a seismic shift. Today, the most pressing conversations in SEO circles revolve around Artificial Intelligence (AI)—specifically the rise of AI Overviews, ChatGPT, and large language models (LLMs). There is a palpable fear that these generative technologies are cannibalizing traffic, forcing brands to pivot toward Generative Engine Optimization (GEO) or Answer Engine Optimization (AEO). While the concern regarding AI-driven traffic loss is statistically valid—particularly for informational, top-of-funnel content—it masks a much larger and more fundamental change in human behavior. The real evolution isn’t just about how AI interprets data; it is about where users are going to find information in the first place. User behavior has fragmented across a dozen different ecosystems, from social media to retail giants. We have entered an era where “search everywhere” is no longer a luxury or a niche strategy; it is the new reality for digital survival. The Fragmentation of the Modern Search Journey The traditional search funnel used to be linear: a user had a problem, they went to Google, they clicked a link, and they found a solution. That journey has been shattered. Today, discovery happens in real-time, across platforms that were never originally intended to be search engines. When a user wants to find a new restaurant, they search TikTok to see the ambiance and the food in motion. When they need to fix a broken appliance, they head to YouTube for a visual tutorial. When they want an unbiased review of a tech product, they append “Reddit” to their query or search the platform directly to avoid the polished marketing fluff of corporate websites. And when they are ready to buy, they often bypass search engines entirely, starting their journey on Amazon. This shift represents more than just a change in habit; it is reflected in hard traffic data. Recent research, including an analysis of 41 websites with significant search activity by SparkToro and Datos, highlights a startling trend. In Q4 of 2025, platforms like Amazon and YouTube continued to drive significantly more desktop traffic and search activity than ChatGPT. While LLMs are growing, they are not yet the primary disruptors of traditional search—fragmentation across specialized platforms is. Rethinking the Competitive Landscape One of the biggest mistakes a modern brand can make is assuming their only competitors are the companies selling the same products or services. In a “search everywhere” world, your competitors are often content creators, community hubs, and media platforms that occupy the digital real estate your audience frequents. In a recent share of voice analysis conducted for a major client, the objective was to identify who was winning in traditional search across multiple service lines and to map out a content roadmap to fill those gaps. The results were eye-opening. While the client expected to see their direct business rivals at the top of the list, the analysis revealed that their biggest competitors for visibility were actually YouTube and Reddit. 
These third-party platforms are not just “social sites”; they are search powerhouses that rank exceptionally well in traditional Search Engine Results Pages (SERPs). They take up valuable real estate, such as video carousels and “Discussions and Forums” modules. When a user clicks a Reddit thread or a YouTube video from a Google result, they are funneled away from the traditional web and into a proprietary ecosystem. If your brand does not have a presence on these platforms, you are effectively invisible to a massive segment of your target market, regardless of how well your website’s blog is optimized. The Power of In-Platform Search Volume Understanding the “search everywhere” reality requires looking beyond Google’s keyword tools. Depending on the intent behind a query, there may be far more search volume occurring within a specific platform than on all traditional search engines combined. This is particularly true for “how-to” and educational content. Take, for example, the query “how to fix a leaky sink faucet.” Data from tools like Semrush and vidIQ suggest that this specific term can have up to 15 times more search volume on YouTube than on traditional search engines globally. For a homeowner standing in a puddle of water, a 1,500-word blog post is less helpful than a three-minute video showing exactly which wrench to use and which direction to turn it. The takeaway for SEOs is clear: if your content strategy is restricted to text-based articles, you are capping your potential reach. To be truly “search everywhere” friendly, a holistic approach is required. For a topic like home repair, the strategy should involve creating a high-quality YouTube video and then embedding that video within a comprehensive blog post. This allows you to capture traffic from YouTube’s internal search, Google’s video carousels, and traditional organic listings simultaneously. The Influence of Social Platforms on AI Citations The “search everywhere” phenomenon also dictates how AI models like ChatGPT, Claude, and Gemini perceive your brand. LLMs do not generate answers in a vacuum; they synthesize information from a vast web of data. Crucially, they do not just look at your own website to understand who you are or what you do. In fact, they often prioritize third-party sources to establish a “consensus.” AI visibility tools provide a window into how these citations work. In multiple analyses of major brands, a consistent pattern emerges: a very small percentage of AI citations (often less than 10%) come from the brand’s own website or those of its direct competitors. Instead, nearly 90% of citations originate from: Third-party news and online publications. Social media platforms (LinkedIn, X, TikTok). Forum platforms like Reddit and Quora. Niche review sites and industry aggregators. This creates a new challenge for SEOs: the “Consensus Layer.” If you want an AI to recommend your

AI is squeezing marketing agencies from both sides

The digital marketing landscape is currently navigating a period of profound transformation, fueled by the rapid integration of artificial intelligence. While the early days of the AI boom were filled with promises of unprecedented efficiency and improved profit margins, the reality hitting agency owners in 2025 is far more complex. Instead of a golden age of productivity, many agencies find themselves caught in a vice. They are being squeezed from both sides: by the very technology they adopted to save time and by clients who now view that same technology as a reason to pay less. The numbers reflected in recent industry research tell a sobering story of rising anxiety. According to SparkToro’s annual State of Digital Agencies survey, which gathers insights from hundreds of agency owners globally, the perception of AI as a threat is accelerating. In 2024, 44% of digital marketing agencies viewed AI as a significant threat to their business model. By 2025, that number surged to 53%. This shift indicates that the “wait and see” approach has evaporated, replaced by a tangible struggle for survival in a commoditized market. The Efficiency Paradox: Why Saving Time Isn’t Saving Margins When generative AI tools like ChatGPT, Claude, and Midjourney first became mainstream, the value proposition for agencies seemed obvious. If a junior copywriter took four hours to draft a blog post and a bot could do it in four seconds, the agency could theoretically produce ten times the content with the same headcount. This “promise of efficiency” was supposed to be a boon for agency margins. The plan was simple: automate the repetitive, low-level tasks—such as keyword research, initial drafting, performance reporting, and basic ad copy variations—and pocket the difference. However, this strategy relied on one critical assumption: that clients wouldn’t notice or wouldn’t care. That assumption proved to be a massive miscalculation. Clients are now performing the same math. They have access to the same tools and are being bombarded by “AI-first” marketing narratives. When a brand realizes that an agency is using automation to handle 70% of the workload, they naturally begin to question the traditional retainer model. If the work is faster and easier to produce, the client demands that those cost savings be passed on to them. This has led to a “race to the bottom” in pricing for execution-heavy services. The Squeeze from the Client Side: In-Housing and Budget Cuts Agencies are not just competing against each other anymore; they are competing against their own clients’ internal capabilities. As AI lowers the barrier to entry for technical marketing tasks, more brands are bringing work in-house. Tasks that once required a specialized agency team can now be handled by a single internal marketing generalist armed with a suite of AI tools. Al Sefati, CEO of Clarity Digital Agency, has observed this trend firsthand. He notes that several services agencies once charged a premium for are now performed internally or through specialized automation software. This shift has turned previously high-margin offerings into commodities. Sefati points out that even when performance metrics are strong, clients are increasingly prone to “putting marketing on pause” or backing out of contracts due to broader economic uncertainty and the belief that they can maintain a baseline level of activity themselves using AI. When budgets get tight, the agency is often the first line item to be scrutinized. 
If the agency’s primary value is “execution,” and AI can execute, the agency becomes expendable. This pressure is particularly acute for boutique agencies that lack the scale to offer deep strategic consulting or proprietary technology. The Lengthening Sales Cycle and the Demand for ROI The uncertainty surrounding AI’s role in marketing has also had a chilling effect on the sales process. SparkToro’s research highlights a significant lengthening of sales cycles. In 2024, many agencies could close deals within a month. In 2025, a growing number of agencies report that deals are taking 7-8 weeks, or even upwards of 12 weeks, to finalize. Prospects are hesitant to commit to long-term retainers because they are waiting to see how AI will further disrupt the space. They are asking harder questions during the procurement phase: “How much of this is being done by humans?” and “If you use AI, why does it cost this much?” Furthermore, the expectation for results has reached an all-time high. In an era where data is more accessible than ever, “progress” is no longer a valid metric. Brands are demanding tangible business outcomes—revenue attribution, pipeline impact, and a clear return on ad spend (ROAS). The fluff has been stripped away, leaving agencies to prove their worth in cold, hard numbers while their fees are being pushed downward. The Hidden Crisis: A Hollowing Out of Junior Talent Perhaps the most long-term damaging aspect of the AI squeeze is the threat to the talent pipeline. The SparkToro survey revealed that 66% of agency owners are worried that junior team members will have fewer career opportunities in the future. This isn’t just a concern about entry-level unemployment; it’s a concern about the future of marketing expertise. Historically, agencies functioned as the ultimate training ground. Junior staff members would spend years “in the weeds”—doing the repetitive work of keyword mapping, manual reporting, and drafting hundreds of ad variations. These tasks were often tedious, but they provided the foundational knowledge necessary to become a senior strategist. You can’t lead a high-level SEO strategy if you don’t truly understand how search intent relates to on-page content. AI is now automating exactly these “training ground” tasks. If an agency uses AI to handle all the foundational work, the junior staff has nothing to do. If there are no junior staff, there is no one to eventually replace the senior strategists. This creates a “talent gap” where agencies may soon find themselves with a few highly paid, aging experts and a void of middle-management talent who knows how to actually do the work. The industry risks hollowing itself out from the bottom up. What AI Cannot Replace: The


Duplicate website stats appear in Google paid search ads

The Growing Concern Over Data Accuracy in Google Paid Search

In the highly competitive world of digital marketing, trust is the ultimate currency. When a user enters a query into Google, they are met with a mix of organic results and paid advertisements. For years, Google has bolstered the credibility of these paid ads by integrating "trust signals": small snippets of data such as customer ratings, seller reviews, and website statistics. These signals are designed to help users distinguish between a reputable brand and a less established one, ultimately driving higher click-through rates (CTR) for advertisers.

However, a recent and highly unusual phenomenon has been spotted within the Google Ads ecosystem: multiple competing ads, representing entirely different businesses and domains, displaying identical website statistics at the same time. The anomaly was first brought to public attention by Anthony Higman, a well-known paid media expert and the founder of Adsquire. Higman's discovery, shared via LinkedIn, has sent ripples through the search engine marketing (SEM) community, raising urgent questions about whether this is a technical glitch, an intentional UI test, or a deeper shift in how Google handles transparency.

Understanding the Anomaly: What Are Duplicate Website Stats?

Website statistics in Google Ads typically appear as automated assets or extensions. These might include data points like the number of visitors a site receives, the number of successful transactions, or other quantitative measures of a brand's reach. Usually, these numbers are unique to the advertiser; a global retail giant would be expected to show significantly higher visitor counts than a local boutique. The value of these stats lies in their specificity: they provide a factual basis for a user to trust one ad over another.

The issue recently identified involves instances where two or more ads appearing on the same search engine results page (SERP) feature the exact same statistical figures. When a user sees two different insurance companies or two different software providers claiming the exact same "millions of users" or "site visits" in a standardized format provided by Google, the data loses its perceived authenticity. It suggests that the numbers are either being pulled from a shared (and likely incorrect) data pool or that Google's system is failing to distinguish between the unique data signatures of individual advertisers.

Why Trust Signals Matter in Paid Search

To understand why this discovery is so concerning for digital marketers, one must look at the psychology of the searcher. Paid search ads are often viewed with a degree of skepticism by savvy internet users. To combat this, Google introduced ad assets (formerly extensions) to provide more context and social proof. These include:

- Seller Ratings: star ratings that reflect the overall consumer experience with a merchant.
- Callouts: short snippets highlighting specific benefits like "Free Shipping" or "24/7 Support."
- Structured Snippets: lists of products or services offered.
- Website Statistics: data-driven metrics that showcase the scale or popularity of a website.

When these signals are accurate, they act as a seal of approval. A high visitor count or a large number of satisfied customers tells the user that the site is safe and reliable. However, if those signals appear duplicated across competitors, the user's internal "BS detector" is triggered. Instead of building trust, the ads begin to look like generic templates.
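To illustrate the anomaly described above, here is a minimal sketch of how an analyst might flag website-stat values that are shared by more than one advertiser on the same SERP. It assumes the ad data has already been collected into a simple list of records; the field names (`advertiser`, `stat_text`) and the sample values are illustrative and not part of any Google interface.

```python
# Minimal sketch: flagging identical website-stat strings shown by
# different advertisers on one SERP. Records and field names are
# illustrative; collecting the ad data is a separate problem.
from collections import defaultdict

ads = [
    {"advertiser": "insurer-a.example", "stat_text": "10M+ site visitors last month"},
    {"advertiser": "insurer-b.example", "stat_text": "10M+ site visitors last month"},
    {"advertiser": "insurer-c.example", "stat_text": "250K policies sold online"},
]


def find_duplicate_stats(records: list[dict]) -> dict[str, list[str]]:
    """Group advertisers by the exact stat string they display and return
    only the stats that appear under two or more different advertisers."""
    by_stat: dict[str, set[str]] = defaultdict(set)
    for ad in records:
        by_stat[ad["stat_text"]].add(ad["advertiser"])
    return {stat: sorted(domains) for stat, domains in by_stat.items() if len(domains) > 1}


for stat, domains in find_duplicate_stats(ads).items():
    print(f"Shared stat '{stat}' shown by: {', '.join(domains)}")
```

On a real SERP, any stat string that surfaces under more than one domain is exactly the kind of duplication Higman flagged.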
This loss of credibility can lead to a phenomenon known as "banner blindness," where users subconsciously ignore trust signals because they no longer believe they represent reality.

Is It a Bug, a Test, or a Shift in Strategy?

At this stage, Google has not released an official statement regarding the appearance of duplicate website stats, which leaves the industry to speculate on three primary possibilities.

1. A UI Display Bug

The most likely explanation, according to many experts, is a technical glitch in how Google's front end displays automated assets. Google Ads is an incredibly complex system that uses machine learning to decide which assets to show for any given query. It is possible that a bug in the rendering engine causes it to fall back to a cached or template value when it fails to fetch the unique data for a specific advertiser. If the system cannot find the specific visitor count for Company A, it might accidentally reuse the data it just fetched for Company B.

2. An Unannounced A/B Test

Google is notorious for testing in production. It is possible that Google is experimenting with generic industry benchmarks rather than site-specific stats. For instance, it might be testing whether a general "industry standard" number (e.g., "Used by 1M+ professionals in this field") is more effective than a site-specific one. If so, the duplication isn't a bug but a feature designed to see whether generalized trust signals can drive CTRs similar to specific ones.

3. Data Aggregation Errors

Another possibility is that the data source itself is flawed. Google pulls statistics from various places, including Google Analytics (if linked), Google Merchant Center, and third-party data aggregators. A collision in how these data points are indexed could associate multiple domains with the same set of statistics. This would be a significant concern for data privacy and accuracy, as it implies a breakdown in the firewall between different advertisers' performance data.

The Impact on Advertiser Performance and Spend

For the advertisers themselves, this issue is more than a visual oddity; it has direct financial implications. Paid search is a game of margins: advertisers bid on keywords with the expectation that their ad's quality and relevance will lead to a conversion. If Google's UI makes an ad look untrustworthy by displaying duplicate or clearly incorrect statistics, several things happen:

- Decreased click-through rate (CTR): If users perceive the ad as fake or the data as canned, they are less likely to click. A lower CTR feeds into a lower Quality Score, which in turn increases the cost per click (CPC) the advertiser must pay to maintain their position (a simplified worked example follows this list).
- Brand dilution: For established brands, having their unique achievements mirrored by competitors erodes the distinctiveness those statistics were meant to convey.
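The CTR-to-CPC mechanics referenced in the list above can be sketched with the commonly cited, simplified model of the classic Google Ads auction, in which an advertiser pays just enough to beat the ad rank of the advertiser below them. The figures and the formula are illustrative of that simplified public model only; the live auction incorporates many more signals.

```python
# Simplified, commonly cited approximation of the classic Google Ads
# auction: actual CPC ~ (ad rank of the advertiser below you / your
# Quality Score) + $0.01. Illustrative only; the live auction uses
# many additional signals.

def simplified_actual_cpc(competitor_ad_rank: float, your_quality_score: float) -> float:
    return round(competitor_ad_rank / your_quality_score + 0.01, 2)


# The advertiser below has ad rank 24 (e.g., a $3.00 bid times Quality Score 8).
competitor_rank = 24.0

print(simplified_actual_cpc(competitor_rank, your_quality_score=8))  # 3.01
print(simplified_actual_cpc(competitor_rank, your_quality_score=6))  # 4.01
```

Under this toy model, holding the same position costs roughly a dollar more per click once duplicated stats depress CTR enough to drag Quality Score from 8 down to 6, which is exactly the kind of margin pressure described above.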
