Google Responds To Error That Causes Old Branding To Persist In SERPs via @sejournal, @martinibuster

The Persistence of Legacy Branding in Search Results

For any business, a rebrand is a monumental undertaking. It involves a shift in visual identity, core messaging, and often a complete overhaul of the digital footprint. In an ideal world, once the new brand is launched and the website updated, search engines would immediately recognize the change and reflect it in the Search Engine Results Pages (SERPs). However, the reality of SEO is often far more complex.

Recently, a particularly frustrating scenario caught the attention of the SEO community. A website owner reported that even though their site had rebranded over a decade ago, Google was still displaying the old, outdated branding in its search results. This issue highlights a significant challenge in technical SEO: how do you convince an algorithm with a "long memory" that your identity has permanently changed?

Google's John Mueller recently addressed this specific concern, providing insight into why these errors occur and what site owners can do to rectify a situation where the past refuses to stay in the past. Understanding the mechanics behind how Google identifies a brand name is essential for any digital marketer or business owner navigating a transition.

John Mueller on the Challenge of Outdated Branding

When a website undergoes a rebranding process, the expectation is that Google will crawl the new pages, see the updated title tags, and adjust the SERP snippets accordingly. However, as Mueller explained, Google's systems do not rely on a single data point to determine what a site should be called in search results. Instead, they use a variety of signals gathered from across the web. The case in question involved a brand that had moved on ten years prior. Despite the passage of a decade, the legacy name persisted.

Mueller noted that while Google tries to be as dynamic as possible, certain automated systems can become "stuck" on historical data if the signals provided to the search engine are inconsistent or if old signals remain overwhelmingly strong. This persistence isn't necessarily a bug in the traditional sense, but rather a byproduct of how Google's "Site Name" system works. This system is designed to provide users with a clear, recognizable name for a website, which often differs from the specific <title> tag of an individual page. When the system encounters conflicting information, it may default to the name it has the most "confidence" in, which, in some cases, happens to be the old branding.

How Google Determines Site Names

To understand why old branding persists, we must look at the specific signals Google uses to generate site names in the SERPs. Introduced and refined over the last few years, the "Site Name" feature is distinct from the page title. Google uses several sources to determine this name:

1. WebSite Structured Data

The primary way Google encourages site owners to define their preferred name is through Schema.org structured data: specifically, the "name" property within the WebSite structured data type. If this is missing or incorrectly configured, Google is left to guess based on other on-page and off-page elements.

2. Title Tags and H1 Headings

While the site name system is automated, it still heavily weighs the content found in the <title> tag of the homepage and the main H1 heading. If a site rebrands but neglects to update these fundamental elements across the entire domain, Google will receive mixed signals.

3. Internal Link Anchor Text

Google looks at how a site refers to itself. If internal links, such as those in the footer or the "About Us" section, still use the old brand name as anchor text, the algorithm may conclude that the old name is still the authoritative one.

4. External Citations and Backlinks

This is often where the "ten-year lag" comes into play. If a site was well-established under its old name, it likely has thousands of backlinks from other websites using the old name as anchor text. Furthermore, business directories, Wikipedia entries, and news articles may still reference the legacy brand. If these external signals are not updated, Google's Knowledge Graph may continue to associate the domain with the old identity.

Why Ten Years Isn't Always Enough for an Automatic Update

One might assume that ten years of fresh content would be enough to drown out the past. However, Google's algorithms are designed for stability. If a site was an authority in its niche for 20 years under "Brand A" and then changed to "Brand B," the historical weight of "Brand A" is massive.

In the case Mueller discussed, the persistence of the old brand suggests that there are still significant "hooks" in the digital ecosystem pointing to the former name. This could be due to legacy subdomains that were never redirected, old image alt-text that remains unchanged, or a failure to update the organization's structured data to reflect the name change. When the automated system for Site Names runs, it weighs all available data. If the "old" data still carries significant authority, it can override the "new" data.

Technical Steps to Fix Persistent Branding Errors

If you find yourself in a situation where Google is displaying an outdated brand name, a systematic approach is required to provide Google with the clarity it needs. Here are the steps John Mueller and SEO best practices suggest:

Audit Your Structured Data

Ensure that your homepage contains the "WebSite" structured data. This is no longer optional for brands that want to control their SERP appearance. The markup should look something like this:

```json
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Your New Brand Name",
  "url": "https://www.yourdomain.com/"
}
```

Google also supports the "alternateName" property, which can be useful if your brand is commonly known by an acronym or a shorter version of the full name. Once updated, use the Rich Results Test tool to ensure Google can read the markup correctly.

Check the 'Organization' Schema

While "WebSite" schema handles the site name in SERPs, "Organization" schema helps inform the Knowledge Graph. Ensure your Organization markup reflects the new name, new logo, and updated
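Sites that render pages from templates can emit this markup programmatically. A minimal Python sketch of that idea, including the optional "alternateName" property mentioned above (the brand name, acronym, and domain below are placeholders, not values from the article):

```python
import json

def website_schema(name, url, alternate_name=None):
    """Build JSON-LD WebSite markup for embedding in the homepage."""
    data = {
        "@context": "https://schema.org",
        "@type": "WebSite",
        "name": name,
        "url": url,
    }
    if alternate_name:
        # "alternateName" covers an acronym or short form of the brand
        data["alternateName"] = alternate_name
    return json.dumps(data, indent=2)

# Placeholder values; substitute your rebranded name and domain
print(website_schema("Your New Brand Name", "https://www.yourdomain.com/", "YNB"))
```

The output belongs inside a `<script type="application/ld+json">` tag on the homepage; after deploying, run the page through the Rich Results Test as described above.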


Google Search Live expands globally where AI Mode is available

The Global Arrival of Conversational Search

Google has officially announced the global expansion of Google Search Live, bringing its most advanced conversational AI search experience to more than 200 countries and territories. This move represents a significant shift in how users interact with information, moving away from the traditional model of typing queries into a search bar and toward a seamless, real-time dialogue. The expansion is specifically targeted at regions and languages where Google's AI Mode is already supported, marking a major milestone in the company's efforts to integrate generative artificial intelligence into the core of its consumer products.

Google Search Live is designed to provide a more intuitive way to explore the web. Rather than receiving a static list of blue links, users can engage in a back-and-forth conversation with an AI agent that understands context, maintains the thread of a discussion, and provides verbal responses alongside web-based citations. This global rollout signals that the experimental phase of conversational search is coming to an end, as Google positions these tools as the new standard for digital exploration.

Powered by Gemini 3.1 Flash Live

The backbone of this global expansion is Google's latest audio and voice model, Gemini 3.1 Flash Live. This specific iteration of the Gemini model family is optimized for speed, low latency, and natural language processing. In the world of conversational AI, "latency" is the enemy of a good user experience; if an AI takes several seconds to process a voice command, the conversation feels robotic and disjointed. Gemini 3.1 Flash Live solves this by delivering near-instantaneous responses that mimic human conversational cadences.

Google notes that this new model is inherently multilingual. This is a critical development for a global rollout, as it allows the system to understand and respond in various languages and dialects without losing the nuance of the user's intent. For users in the 200+ supported countries, this means they can speak to Google Search in their preferred language and receive answers that feel localized and contextually relevant. The model's ability to handle complex, multi-part questions verbally is a direct result of the improvements made in the Flash architecture, which prioritizes efficiency without sacrificing the depth of information retrieval.

How to Access and Use Google Search Live

Integrating Search Live into your daily routine is straightforward, provided you have the latest version of the Google app. The feature is available on both Android and iOS platforms, ensuring parity across the mobile ecosystem. To begin a session, users simply need to open the Google app and look for the "Live" icon situated directly under the main Search bar. Tapping this icon activates the microphone and transitions the interface into a dedicated conversational mode.

Once inside the Live interface, the experience is largely hands-free. You can ask a question out loud, ranging from complex philosophical inquiries to simple weather updates, and receive an audio response. The real power of the tool lies in its ability to handle follow-up questions. For instance, if you ask about the best time to visit Tokyo, you can immediately follow up with, "What about the weather during that time?" without having to specify you are still talking about Tokyo. The AI maintains the context of the conversation, allowing for a deep dive into specific topics.

For those who prefer a hybrid experience, the interface also provides helpful web links. While the AI speaks the answer, the screen populates with citations and resources that allow the user to verify information or explore the topic further. This ensures that the transparency of the web remains a core component of the search experience, even as the primary interaction method shifts to voice.

Enhancing Search with Visual Context

One of the most impressive features included in the global expansion is the ability to use visual context to inform a search query. By enabling the camera within the Search Live interface, users can effectively show Google what they are looking at. This multimodal approach bridges the gap between the physical world and digital information.

Consider a scenario where you are trying to assemble a piece of furniture or repair a household appliance. Instead of trying to describe a specific screw or a complex mechanical part using words, you can simply point your camera at the object and ask, "How do I install this?" Google Search Live analyzes the video feed in real time, identifies the components, and provides step-by-step verbal instructions alongside relevant web links or video tutorials. This feature is also accessible through Google Lens; by selecting the "Live" option at the bottom of the Lens screen, users can engage in a real-time conversation about the objects, text, or landmarks visible through their viewfinder.

The Evolution of Voice Interaction at Google

The global launch of Google Search Live is the culmination of years of iterative development. The technology has evolved through several distinct phases, each adding a layer of sophistication to the user experience. Initially, the concept began as an opt-in beta known as "Talk and Listen." This early version was primarily focused on basic voice recognition and text-to-speech responses, lacking the fluidity of a true conversation. In September, Google launched Search Live with video capabilities in the United States, allowing users to test the multimodal features that are now going global. Before the introduction of video, the tool was largely restricted to audio-only interactions.

The transition from "Talk and Listen" to a full-fledged "Live" experience reflects Google's broader strategy: moving away from reactive tools (where the user asks and the AI answers) toward proactive assistants (where the user and the AI collaborate in real time).

What This Means for SEO and Digital Publishers

For SEO professionals, digital marketers, and content creators, the global expansion of Google Search Live introduces new challenges and opportunities. The most immediate concern for many publishers is the potential for "zero-click" searches to increase. When a user can get a comprehensive, narrated answer directly within the Google app, the incentive to click through to a website decreases significantly. This is especially true for


Google PMax gets new exclusions, expanded reporting features

The Evolution of Performance Max: From Black Box to Steerable Automation

Since its wide release in late 2021, Google's Performance Max (PMax) has been a polarizing topic in the digital marketing world. On one hand, it offers an unparalleled ability to reach audiences across the entire Google ecosystem (Search, YouTube, Display, Discover, Gmail, and Maps) using a single campaign. On the other hand, seasoned media buyers have long criticized it for being a "black box," offering limited transparency and few "levers" to pull when performance deviates from expectations.

Google is now addressing these long-standing concerns with a suite of new updates designed to give advertisers more control over their spend and better visibility into their results. The latest announcement introduces first-party audience exclusions, expanded reporting for demographics, network segmentation for placements, and advanced budget forecasting tools. These updates represent a significant shift in Google's philosophy, moving away from purely automated "hands-off" advertising toward a "steerable AI" model that prioritizes human strategic input.

First-Party Audience Exclusions: Optimizing for New Customer Acquisition

One of the most requested features for Performance Max has been the ability to accurately exclude existing customers. While PMax has always focused on driving conversions, it hasn't always been easy to distinguish between a conversion from a loyal, long-time customer and a conversion from someone completely new to the brand. With the introduction of first-party audience exclusions, advertisers can now upload their own customer lists (Customer Match) and explicitly tell the PMax algorithm to ignore these individuals. This is a game-changer for businesses focused on aggressive growth and net-new customer acquisition.

The Problem with Repeat Conversions in PMax

In the past, PMax campaigns often focused on "low-hanging fruit." If the algorithm identified that an existing customer was likely to buy again, it would serve them an ad to secure that conversion. While this looks great on a spreadsheet in terms of Return on Ad Spend (ROAS), it often fails the "incrementality" test. If a customer was already going to buy, paying for a click to facilitate that purchase is often a waste of marketing budget.

Driving Down Customer Acquisition Cost (CAC)

By using the new audience exclusion features, brands can ensure that every dollar spent on PMax is going toward finding someone who has never interacted with the brand before. This allows for a much cleaner calculation of Customer Acquisition Cost (CAC). By removing existing customers from the equation, the data fed back into the machine learning model becomes more refined, teaching the AI to look for profiles that resemble prospects rather than current users.

Full Audience Reporting: Transparency in Demographics

Transparency has been the primary battleground for PMax users. For years, advertisers had to guess who exactly was seeing their ads. While "Audience Signals" allowed users to suggest who the AI should target, the reporting on who actually converted was often opaque. Google is now expanding audience reporting to include detailed breakdowns by age and gender. This level of granularity allows advertisers to see exactly which demographic segments are driving the most value and, conversely, which segments are consuming budget without delivering results.

Refining Creative Strategy Through Data

Demographic reporting does more than just show who clicked; it informs the entire creative process. If the data shows that a campaign is performing exceptionally well with women aged 25–34 but poorly with men of the same age, the advertiser can make a strategic decision. They might choose to create specific video assets for YouTube that speak more directly to the high-converting demographic or adjust their messaging to better resonate with the underperforming group.

Validation of Audience Signals

This update also provides a way to validate the "Audience Signals" provided at the start of a campaign. If you told Google to target "Outdoor Enthusiasts" but the reporting shows your ads are primarily being served to a demographic that doesn't fit that profile, you can adjust your signals or your creative assets to get the campaign back on track. It turns PMax from a "set it and forget it" tool into a diagnostic tool for market research.

Network Segmentation: Understanding Placement Performance

One of the biggest anxieties for brand managers using Performance Max is "where" their ads are showing. Because PMax spans so many different networks, there is always a risk that ads might appear on low-quality websites or in environments that don't align with the brand's image. Previously, the placement report was somewhat limited, making it difficult to see the performance split between the Search network, YouTube, and the Display network. Google's new update allows for network segmentation within the "When and where ads showed" report. This means advertisers can finally see a breakdown of how their ads are performing on a network-by-network basis.

Protecting Brand Safety

Brand safety is a top priority for enterprise-level advertisers. The ability to segment placements by network allows for a more rigorous audit of where the budget is going. If an advertiser notices that a large portion of their spend is being diverted to the Google Display Network (GDN) with a high bounce rate and low conversion rate, they now have the data to back up a request for account-level exclusions or a shift in strategy.

Optimizing for Different User Mindsets

Users behave differently depending on which Google property they are using. A user on Search has high intent; they are looking for a specific solution. A user on YouTube might be in a "discovery" or "entertainment" mindset. By seeing which networks are driving the best performance, advertisers can tailor their expectations and their ROAS targets more accurately. For example, if YouTube is driving high-funnel awareness but low direct conversions, the advertiser can value those impressions differently than a direct-response Search click.

Budget Reporting and Forecasting Tools

Managing spend in an automated environment can be a volatile experience. Performance Max is notorious for its daily spend fluctuations, as the algorithm aggressively pursues opportunities when it identifies high-intent traffic. This can make it difficult for media buyers to stay within a strict monthly budget or to


Automated traffic is growing 8x faster than human traffic: Report

The Rapid Shift in Web Demographics

The landscape of the internet is undergoing a fundamental transformation. For decades, the web was built by humans, for humans. However, a landmark report from HUMAN Security titled the "2026 State of AI Traffic" reveals that the digital world is rapidly being populated by non-human entities. According to the data, automated traffic grew by a staggering 23.5% year-over-year in 2025. This growth rate is nearly eight times faster than that of human traffic, which saw a modest increase of only 3.1% during the same period.

This surge represents more than just a statistical anomaly; it signals a paradigm shift in how information is consumed, processed, and acted upon online. As artificial intelligence becomes more sophisticated, it is no longer just "crawling" the web to index it for search engines. It is now actively participating in the digital economy, simulating human behavior, and in many cases, making decisions on behalf of users. For digital publishers, SEO experts, and tech enthusiasts, these findings provide a critical look at a future where the majority of "visitors" to a website may not be people at all.

Understanding the Anatomy of Automated Traffic

To grasp why this growth is occurring so rapidly, it is essential to define what constitutes automated traffic in the current era. The HUMAN Security report defines it as all internet traffic generated by software systems rather than human users. This is a broad category that includes traditional automation, such as search engine crawlers, monitoring bots, and conventional scraping tools, as well as the newer, more complex category of AI-driven traffic.

While traditional bots have been a part of the internet since its inception, the recent explosion is driven by AI agents and agentic browsers. The report highlights that AI-driven traffic volume increased by 187% year-over-year. More shockingly, traffic from specific AI agents and agentic browsers, such as OpenAI's Atlas and Perplexity's Comet, grew by nearly 8,000% within a single year. These are not simple scripts; they are advanced systems designed to browse the web with intent, often mimicking the navigation patterns of a human user to achieve a specific goal.

The Three Pillars of AI-Driven Traffic

The report categorizes AI-driven traffic into three distinct tiers, each serving a different purpose and impacting web ecosystems in unique ways:

Training Crawlers: These systems are designed to collect massive datasets to train large language models (LLMs). Currently, they represent the largest share of AI traffic at 67.5%. However, their total share of the pie is actually declining. This isn't because there are fewer training crawlers, but because other types of AI traffic are scaling at a much faster rate.

Real-Time Scrapers: These are the engines behind AI-powered search and real-time answer engines. Unlike training crawlers, which gather data for future model updates, real-time scrapers fetch information "on the fly" to provide current answers to user queries. Scraper traffic grew by nearly 600% in 2025, fueled by the rising popularity of platforms that prioritize direct answers over a list of links.

Agentic AI Systems: These represent the most disruptive segment of automated traffic. While still a smaller portion of the total volume, they are growing the fastest. These systems are capable of executing tasks autonomously, such as booking a flight, researching a product, or even completing a checkout process without direct human intervention at every step.

AI Agents: From Data Harvesters to Autonomous Users

One of the most significant takeaways from the report is how AI agents are beginning to behave like human users. In the past, a "bot" would hit a page, scrape the text, and leave. Today's AI agents are far more sophisticated. They navigate through sales funnels, interact with search bars, and even engage with account-level features.

The data from 2025 illustrates this behavioral evolution clearly. Approximately 77% of observed AI agent activity occurred on product and search pages, indicating that these agents are being used for deep research and comparison shopping. Furthermore, nearly 9% of agent interactions touched account-level features, requiring the agents to log in or navigate personalized areas of a site. Perhaps most tellingly, more than 2% of agent traffic reached the checkout flow, showing that AI is moving closer to handling financial transactions independently.

This shift from "reading" to "doing" changes the stakes for e-commerce and lead generation. If an AI agent is the one making the purchase decision, the traditional psychological triggers used in web design, such as color schemes, urgent copy, or influencer testimonials, may lose their efficacy. Instead, optimization must focus on providing clear, structured data that an agent can parse and act upon efficiently.

The Road to 2027: Will Bots Overtake Humans?

The findings in the HUMAN Security report lend weight to a bold prediction made by Cloudflare CEO Matthew Prince. Prince recently suggested that bots could overtake human web usage by as early as 2027. Given that automated traffic is already growing eight times faster than human traffic, this timeline seems increasingly plausible.

The implications of a "bot-majority" internet are profound. It suggests a future where the "Dead Internet Theory" (the idea that most online activity and content creation are already handled by AI) moves from a fringe conspiracy to a measurable reality. As AI agents become the primary way people interact with the web, the "human" part of the internet may become a smaller, curated layer on top of a massive machine-to-machine ecosystem.

However, this doesn't necessarily mean the internet will become a digital wasteland. Instead, it suggests a transition in how value is created. If machines are the primary consumers of content, the way we measure "traffic" and "engagement" must be completely reinvented. A "hit" from an OpenAI agent may be more valuable than a "hit" from a human if that agent is authorized to make a high-value purchase on behalf of a corporate client.

What This Means for SEO and Digital Marketing

For the SEO industry, this report is a wake-up call. The traditional playbook (optimize for Google's algorithm to attract human clicks) is becoming incomplete. We are entering an era


Google-Agent user agent identifies AI agent traffic in server logs

The landscape of the internet is shifting from a platform of information retrieval to an ecosystem of automated action. As artificial intelligence evolves from simple chatbots into autonomous agents capable of performing complex tasks, the technical infrastructure of the web must adapt to identify and accommodate these new visitors. In a significant move toward this future, Google has officially introduced a new user agent: Google-Agent. This identifier is designed to help webmasters and developers distinguish between traditional search engine crawlers and the emerging class of AI agents acting on behalf of human users.

The rollout of Google-Agent, which began on March 20, marks a pivotal moment for technical SEO and server management. For years, server logs have been dominated by Googlebot, the tireless crawler that indexes the web for Search. However, Google-Agent represents something entirely different: a user-triggered fetcher. Understanding the nuances of this new user agent is critical for anyone managing a website, as it provides the first clear window into how AI agents are interacting with your content and completing conversions in real time.

What is Google-Agent?

Google-Agent is a specialized user agent used by AI agents hosted on Google's infrastructure. Unlike Googlebot, which crawls the web autonomously to build an index, Google-Agent is triggered by a specific request from a human user. When a person asks an AI, such as those powered by Google's experimental Project Mariner, to perform a task that requires visiting a website, Google-Agent is the "digital representative" that makes the trip.

Google classifies this under its "user-triggered fetchers" category. These are tools that only access the web when a user explicitly initiates an action. This is a fundamental distinction. While a visit from Googlebot is about discovery and indexing, a visit from Google-Agent is about utility and execution. It is the difference between a librarian cataloging a book and a personal assistant opening that book to find a specific answer or make a purchase for their employer.

The Functional Mechanics: How Google-Agent Operates

To understand the impact of Google-Agent, it is important to look at what these AI agents are actually doing when they land on your server. According to Google's documentation, these agents are capable of navigating the web much like a human would. This includes:

Browsing and Contextual Evaluation: The agent can read the content of a page to determine if it meets the user's needs.

Task Completion: This is the most transformative aspect. Agents are designed to perform actions, such as filling out forms, interacting with dropdown menus, or moving through a multi-step checkout process.

Direct Interaction: Instead of just clicking a link, the agent might submit a search query within a site's internal search bar or click a "Subscribe" button based on a user's prompt.

This behavior is powered by advanced models that can interpret the DOM (Document Object Model) of a webpage and interact with elements programmatically. Because these actions are user-initiated, blocking Google-Agent could inadvertently block a legitimate customer who is simply using an AI tool to facilitate their interaction with your business.

Technical Specifications: User Agent Strings and IP Ranges

For developers and system administrators, the ability to identify Google-Agent in server logs depends on recognizing its specific user agent strings. Google has provided two versions: one for desktop-based agent activity and one for mobile-based activity.

Desktop User Agent String

The desktop version of the agent follows a standard format that identifies it as compatible with Chrome and Safari, while clearly labeling the Google-Agent identity:

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent) Chrome/W.X.Y.Z Safari/537.36

Mobile User Agent String

The mobile version mimics a Nexus 5X device, ensuring that the agent receives the mobile-optimized version of a website's layout:

Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent)

In both instances, the "W.X.Y.Z" placeholder represents the version of Chrome being used by the agent at the time of the fetch. Importantly, Google has also published specific IP ranges for these agents. It is vital for security teams to whitelist these IP ranges in Web Application Firewalls (WAFs) and Content Delivery Networks (CDNs) to prevent the agent from being flagged as a malicious bot.

Why Google-Agent Matters for SEO and Digital Marketing

The introduction of Google-Agent isn't just a technical update; it's a strategic shift in how we measure web traffic. For the first time, webmasters can differentiate between "search traffic" and "agentic traffic." This has several implications for the future of digital marketing.

1. Identifying Agent-Assisted Conversions

Until now, if an AI tool visited a site to perform a task, it might have been lumped in with general bot traffic or misidentified as a standard browser visit. By filtering for Google-Agent in your logs, you can now track how many conversions, whether lead forms, newsletter signups, or product purchases, are being completed by AI assistants. This data is invaluable for understanding how your target audience is evolving its browsing habits.

2. Distinguishing Genuine User Intent from Background Crawling

Standard SEO metrics often struggle to separate Googlebot's "crawling for the sake of crawling" from meaningful interactions. Google-Agent provides a clear signal of high-intent traffic. If Google-Agent is visiting your site, it means a human has specifically asked an AI to look at your content. This is a "warm" lead in every sense of the word, and it signals that your content is being surfaced in AI-driven workflows.

3. Preparing for Agentic Search

We are entering the era of "Agentic Search," where users no longer want a list of blue links; they want a result that performs a task. If a user tells their AI, "Find the best flight to London and put it in my cart," the AI will use Google-Agent to visit airline sites. By monitoring this traffic now, businesses can see how well their sites are handling these automated visitors and optimize the experience to ensure the agent doesn't get "stuck" on a broken form or a complex CAPTCHA.

The Challenges of Blocking vs. Allowing AI Agents

With
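Given the user agent strings published for Google-Agent, flagging its requests while scanning server logs can be sketched in a few lines of Python. This is a minimal illustration, not a complete log parser; note that user agent strings can be spoofed, so a production check should also validate requests against Google's published IP ranges:

```python
import re

# Matches the "compatible; Google-Agent;" token present in both the
# desktop and mobile user agent strings documented by Google.
GOOGLE_AGENT_RE = re.compile(r"compatible;\s*Google-Agent;")

def is_google_agent(user_agent):
    """Return True if a User-Agent header advertises Google-Agent."""
    return bool(GOOGLE_AGENT_RE.search(user_agent))

# The desktop string from Google's documentation, wrapped for readability
desktop_ua = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; "
              "Google-Agent; +https://developers.google.com/crawling/docs/"
              "crawlers-fetchers/google-agent) Chrome/W.X.Y.Z Safari/537.36")

print(is_google_agent(desktop_ua))  # True
```

Running this check over the User-Agent field of each access-log entry is enough to split "agentic traffic" from ordinary browser and Googlebot visits for the reporting use cases described above.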


SMX Now: Learn how brands must adapt for AI-driven search

The Fundamental Shift in Digital Visibility

The landscape of search engine optimization is undergoing its most radical transformation since the inception of the Google algorithm. For decades, the primary goal of digital marketing has been “ranking”—securing a spot in the coveted “ten blue links.” However, as generative AI continues to integrate into search engines through Google’s AI Overviews, Bing Chat, and specialized tools like Perplexity, the metrics for success are changing. Visibility in the modern era is no longer just about where you appear on a list. It now depends on whether your content is discovered, evaluated, and ultimately selected by an artificial intelligence model to serve as a definitive answer for a user. This shift marks the transition from Search Engine Optimization (SEO) to Generative Engine Optimization (GEO).

To address these seismic changes, the industry is looking toward new methodologies. A pivotal moment for brands and marketers arrives on April 1 at 1 p.m. ET, as the new monthly SMX Now webinar series kicks off. This session, featuring the expert team from iPullRank, will provide a deep dive into the strategies brands must adopt to survive and thrive in an AI-first search environment.

Introducing SMX Now: A Deep Dive into AI Search Strategy

The debut of SMX Now brings together some of the most forward-thinking minds in the search industry. Led by iPullRank’s Zach Chahalis, Patrick Schofield, and Garrett Sussman, the webinar aims to demystify how generative engines process information. The core of the discussion revolves around iPullRank’s “Relevance Engineering” (r19g) framework. This framework is designed to help brands execute a successful GEO strategy through an omnichannel approach. Rather than focusing solely on keywords, Relevance Engineering looks at the underlying architecture of how AI interprets authority, relevance, and user intent. In this new paradigm, brands cannot afford to wait for the dust to settle. The mechanisms of AI search—such as query fan-outs, retrieval-augmented generation (RAG), and LLM (Large Language Model) citation—are already dictating which brands win and which ones disappear from the conversational interface.

The Rise of Generative Engine Optimization (GEO)

Generative Engine Optimization is the evolution of traditional SEO. While traditional SEO focuses on signals like backlinks, site speed, and keyword density to please a crawler, GEO focuses on how to make content “retrievable” and “citable” for a generative AI. AI models do not “search” the web in the same way a traditional crawler does. Instead, they utilize a process of retrieval where the model looks for the most relevant “chunks” of information to synthesize an answer. If your content is not structured correctly, or if it lacks the necessary semantic depth, the AI will bypass your brand in favor of a competitor who has optimized for the generative engine’s logic. The SMX Now session will break down the GEO strategy, emphasizing that success in this field is not universal. What works for a B2B SaaS company might not work for an e-commerce giant. This necessitates a tailored approach based on testing and specialized data analysis.

Understanding Query Fan-Outs and AI Discovery

One of the most technical yet crucial aspects of the upcoming webinar is the exploration of query fan-outs. In traditional search, a user enters a query, and the engine returns a list of matching documents. In AI-driven search, the process is much more complex. When a user asks a question, the AI may “fan out” that query into several sub-queries to gather a comprehensive set of data points. It explores various facets of the topic simultaneously to build a holistic response. For brands, this means your content must be capable of answering not just the primary question, but also the peripheral questions that the AI generates during the fan-out process. Understanding how AI search uses these fan-outs to discover and select sources is the first step in ensuring your content remains relevant. If your content is only optimized for a single keyword, it may be ignored during the broader retrieval phase of a generative search.

The Three-Tier Measurement Model for the AI Era

As the goals of search change, so too must the way we measure success. The standard KPIs of the last decade—click-through rates (CTR) and organic ranking positions—are becoming less reliable as standalone metrics. To combat this, the iPullRank team introduces a three-tier measurement model that focuses on the lifecycle of a piece of content within an AI engine:

Tier 1: Discovery
The first tier measures whether the AI engine is even aware of your content. This involves tracking how often your brand’s data is included in the “knowledge base” or the vector database used by the LLM. If you aren’t being discovered, you cannot be selected.

Tier 2: Selection
Selection occurs when the AI decides that your content is authoritative and relevant enough to be used in its synthesized response. This is the “evaluation” phase where the AI weighs your information against other sources. Measurement here involves looking at how often your brand is chosen as a primary source for an AI Overview or a chatbot response.

Tier 3: Citation Impact
The final tier is the impact of the citation. Even if an AI selects your content, the way it cites your brand matters. Does it provide a clear link? Does it mention your brand name with authority? Measuring the quality and frequency of these citations is the new benchmark for brand authority in the age of GEO.

The Importance of Relevance Engineering (r19g)

Relevance Engineering, or r19g, is a term coined to describe the technical alignment of content with the retrieval mechanisms of AI. It involves an omnichannel content strategy where every piece of data—from blog posts to product descriptions to social media updates—is structured to be machine-readable and semantically rich. During the SMX Now webinar, Zach Chahalis and his team will explain how brands can use r19g to ensure their content is retrieved, surfaced, and cited. This involves moving away from “thin content” and focusing on “high-density information” that provides clear value to the LLM. The framework also addresses the


The parts of Performance Max you can actually control

The Evolution of Performance Max: From Black Box to Strategic Tool

When Google first introduced Performance Max (PMax) to the digital marketing world, the reaction was polarized. For some, it represented the ultimate promise of machine learning—a “set-it-and-forget-it” solution that could navigate the complex web of Search, Display, YouTube, Discover, Gmail, and Maps. For others, particularly seasoned media buyers, it felt like a “black box” that stripped away the granular control they had spent years mastering. Fast forward to the present, and the landscape has shifted significantly. Performance Max is no longer an experimental campaign type; it is a central pillar of the Google Ads ecosystem. Recognizing the need for transparency, Google has gradually pulled back the curtain, introducing new reporting features and, more importantly, new levers of control. While the algorithm still handles the heavy lifting of bidding and real-time auctions, savvy marketers have learned that the key to PMax success lies in how you steer the AI. To get the most out of your budget, you must move beyond passive observation. By mastering the parts of Performance Max you can actually control, you can transform a broad automated campaign into a precision-engineered growth engine. Here is a comprehensive guide on the levers available to you and how to use them effectively.

Control What You Can: Search Terms and Negative Keywords

For a long time, the biggest grievance with Performance Max was the inability to prevent ads from appearing for irrelevant search queries. In traditional Search campaigns, negative keyword lists are the primary defense against wasted spend. In the early days of PMax, these were notoriously difficult to implement.

The Shift to Campaign-Level Control

Previously, adding negative keywords to a Performance Max campaign required a cumbersome manual process. Advertisers had to contact Google support, submit an Excel spreadsheet of desired exclusions, and wait for a representative to apply them to the back end of the account. This lack of agility was a major hurdle for brands with strict compliance needs or those operating in niche markets where the AI might misinterpret intent. Fortunately, Google has streamlined this process. One of the most impactful updates is the ability to add campaign-level negative keywords directly through the interface. By accessing the “Search Terms” report, you can now see exactly what queries are triggering your ads. If you spot a term that is irrelevant, low-intent, or brand-damaging, you can quickly select it and add it to a negative list.

Protecting Brand Equity

Negative keyword control isn’t just about saving money; it’s about brand safety. If your Performance Max campaign is cannibalizing your branded search traffic—often at a higher cost-per-click than a dedicated Brand campaign—you can use exclusions to force the AI to focus on prospecting. This ensures your PMax budget is spent finding new customers rather than paying for users who were already looking for you by name.

Mastering Placements: Where Your Ads Actually Show

Performance Max operates across the entire Google network, which includes millions of partner websites and apps. Without oversight, your ads can end up on “Made for Advertising” (MFA) sites, low-quality mobile games, or YouTube channels that don’t align with your brand values.

The “Where Ads Showed” Report

Google has recently made the Performance Max placements report more accessible. It has been moved from the general account reporting section into the “Where ads showed” section at the campaign level. This move simplifies the analysis process, allowing you to see which domains and apps are generating the most impressions. It is important to note that, currently, this report provides impression-level data rather than full conversion metrics. While this doesn’t give you the “why,” it certainly gives you the “where.” If you notice an astronomical number of impressions coming from a specific mobile app or a kids’ YouTube channel with zero meaningful engagement, you have identified a leak in your budget.

Account-Level Exclusions for Global Control

While you might not have a “Delete Placement” button directly inside the PMax campaign settings for every individual site, you can use Account-Level Exclusions. By navigating to Tools > Content Suitability > Advanced Settings > Excluded Placements, you can upload a list of domains or app categories that you want to block across your entire Google Ads account. This is the most effective way to ensure your Performance Max ads stay away from low-quality “click-farm” environments.

Using Budget Signals and Scheduling to Improve Efficiency

The AI behind Performance Max is designed to spend your daily budget as effectively as possible over a 24-hour period. However, the AI doesn’t always account for the nuances of your business operations. This is where manual ad scheduling becomes a vital control lever.

The Power of Dayparting

Even if you didn’t set a specific schedule during the initial setup, Google tracks performance data on an hourly basis. You can view this data in the “When and where ads showed” section. If you are running a B2B campaign, you might find that engagement drops significantly between 11 PM and 5 AM. If you are an SMB with a limited budget, every dollar spent during these off-peak hours is a dollar that could have been used during high-conversion windows. To take control, navigate to Campaigns > Audiences, keywords, and content > Ad schedule. By restricting your campaign to specific days or times, you concentrate your “firepower” when your audience is most likely to convert. This is particularly useful for businesses that rely on phone calls or live chat, as it prevents ads from running when no one is available to handle the leads.
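As a rough sketch of that dayparting analysis, the following Python snippet flags hours whose conversion rate falls below a threshold, making them candidates for exclusion via the Ad schedule. The hourly numbers are hypothetical; real data would come from an hourly performance export.

```python
# Hypothetical hourly totals from a Google Ads export: hour -> (clicks, conversions)
hourly = {
    0: (120, 1), 9: (300, 18), 12: (420, 25),
    15: (380, 22), 23: (150, 2),
}

def low_value_hours(data, min_rate=0.02):
    """Return hours whose conversion rate is below min_rate."""
    flagged = []
    for hour, (clicks, conversions) in sorted(data.items()):
        rate = conversions / clicks if clicks else 0.0
        if rate < min_rate:
            flagged.append(hour)
    return flagged

# Hours 0 and 23 convert well below 2% here, so they are candidates
# for exclusion from the ad schedule.
print(low_value_hours(hourly))
```

The threshold is arbitrary; in practice you would set it relative to your account-wide conversion rate and confirm the pattern holds across several weeks before restricting the schedule.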
Refining Targeting with Strategic Constraints

Performance Max relies on “Audience Signals” to find new customers, but these signals are just suggestions, not hard boundaries. To truly narrow your focus, you need to use the newer constraint features Google has introduced.

Demographic Exclusions

Demographic exclusions are a relatively new addition to PMax campaign settings. In the past, PMax would show ads to anyone it deemed likely to convert, regardless of age or gender. While this broad approach can


3 Strategies That Can Survive AI Search In 2026: What I Shared At SEJ Live via @sejournal, @theshelleywalsh

The Evolution of Search: Why 2026 Represents a Turning Point

The digital marketing landscape is currently navigating one of the most significant architectural shifts since the inception of the World Wide Web. For decades, the SEO industry has been built on a relatively simple transactional model: a user types a query, a search engine provides a list of links, and the user clicks on the most relevant one. However, as we look toward 2026, this model is being fundamentally dismantled by the rise of Generative AI and Large Language Models (LLMs). During the recent SEJ Live event, the discourse centered on a crucial realization: the traditional pursuit of “rankings” is becoming an obsolete metric. In a world where Google’s Search Generative Experience (SGE), Perplexity, and OpenAI’s SearchGPT provide direct answers, the goal is no longer just to be position one on a page of blue links. The goal is to be the source of truth that the AI cites, or the brand that the AI recommends. Survival in 2026 requires a radical shift in perspective. We are moving from a “search engine” era to an “answer engine” era. To remain relevant, marketers must move beyond keyword density and backlink counts to focus on visibility, authority, and the structural integrity of their information.

Strategy 1: Transitioning from Keywords to Entity-Based Authority

The first pillar of surviving the AI transition is understanding that AI models do not “read” keywords in the way old search algorithms did. Instead, they understand entities—people, places, things, and concepts—and the relationships between them. By 2026, your SEO strategy must be rooted in becoming an indisputable entity within your niche.

Defining Your Entity in the Knowledge Graph

Google and other AI-driven platforms rely on Knowledge Graphs to categorize information. If your brand is not recognized as a distinct entity with clear associations to specific topics, an AI is unlikely to surface your content in its generated responses. To build this authority, you must focus on consistency across the entire web. This involves more than just on-site content. It requires a robust presence on third-party platforms that AI models use as high-trust signals. This includes Wikipedia (where applicable), LinkedIn, industry-specific directories, and reputable news outlets. When an AI scans the web to verify a fact, it looks for consensus. If multiple high-authority sources point to your brand as an expert on a topic, your “entity” gains strength.

The Role of Advanced Schema Markup

Technical SEO in 2026 is less about meta descriptions and more about structured data. Schema markup is the language that allows you to talk directly to an AI’s database. By implementing deep, nested Schema—such as “Person,” “Organization,” “Author,” and “ReviewedBy”—you provide the explicit context that AI needs to understand who you are and why your information is credible. In the next two years, we will see a move toward “Knowledge Graph Optimization.” This means using sameAs links in your Schema to connect your website to your social profiles and other authoritative citations, effectively telling the AI, “All of these different data points represent the same trusted entity.”

Strategy 2: Mastering Intent Mapping and the Conversational Funnel

As search queries become longer and more conversational, the way we produce content must evolve. In 2026, the “short-tail” keyword will likely be dominated by AI-generated summaries that leave little room for organic clicks. To survive, publishers must target the nuances of the conversational funnel.

Moving Beyond the “What” to the “How” and “Why”

AI is exceptionally good at answering “What is…” questions. If your content strategy is based on defining basic terms, you are competing directly with the AI itself—a battle you are likely to lose.
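As an illustration of the sameAs linking described above, this Python snippet builds a minimal Organization JSON-LD block of the kind you would embed in a homepage script tag. The brand name and profile URLs are hypothetical placeholders, not a real entity.

```python
import json

# Hypothetical brand; swap in your own name, site URL, and profile URLs.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com/",
    # sameAs ties the site to other authoritative representations of the
    # same entity, reinforcing the knowledge-graph signal discussed above.
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://en.wikipedia.org/wiki/Example_Brand",
    ],
}

# Emit as the body of a <script type="application/ld+json"> tag.
print(json.dumps(entity, indent=2))
```

The key design point is that every sameAs URL should resolve to a page that unambiguously represents the same organization; inconsistent profiles dilute rather than strengthen the entity signal.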
To capture traffic in 2026, your content must address the “How” and “Why.” This involves creating content that addresses complex, multi-step problems that require a level of nuance or personal experience that an LLM might lack. AI can tell a user what a mortgage is; it has a harder time providing a nuanced, first-person perspective on navigating a specific local real estate market during a period of fluctuating interest rates. By focusing on high-intent, complex queries, you position yourself in the areas where users still feel the need to click through for deeper reading.

The Information Gain Score

Google has signaled, through its “Information Gain” patent, that this concept may play a critical role in its future ranking systems. The idea is that if your article contains the exact same information as ten other articles on the web, it has a low information gain score and is redundant. In an AI-heavy environment, redundancy is a death sentence. To be cited by an AI search engine, your content must provide something new—a unique data point, a proprietary study, a controversial expert opinion, or a highly specific case study. Strategy in 2026 should be less about high-volume content production and more about “Originality Architecture.” Every piece of content should ask: “What does this page offer that an AI could not have summarized from the top five results?”

Strategy 3: Omnichannel Visibility and the Ecosystem of Trust

The third strategy involves breaking the “Google Dependency.” In 2026, search will be fragmented. Users will find information through ChatGPT, TikTok, YouTube, Reddit, and specialized Discord communities. If your visibility strategy is confined to your domain name, you are invisible to a massive portion of the market.

The Rise of “Social Search” and Community Validation

We are already seeing a trend where users append “Reddit” to their search queries to find “real” human answers. AI search engines have noticed this and are increasingly sourcing data from community-driven platforms. Visibility in 2026 means having a presence where humans congregate. This doesn’t mean spamming forums; it means building a brand presence that is discussed and cited by others. When an AI summarizes a product category, it looks at sentiment across social media and review platforms. If your brand is frequently recommended in niche subreddits or mentioned in high-quality YouTube video transcripts, the AI perceives this as a signal of real-world popularity and trustworthiness.

Optimizing for the


Report: Clickout Media turned news sites into AI gambling hubs

The Rise of Scaled Reputation Abuse in Digital Publishing

The digital publishing landscape is currently facing a predatory trend that threatens the integrity of search results and the survival of independent journalism. Recent investigations have shed light on the operations of Clickout Media, a company accused of acquiring established news and niche websites only to strip them of their original purpose. Instead of maintaining the editorial standards that built these sites’ reputations, the company allegedly transforms them into “AI gambling hubs,” flooding them with low-quality content designed to rank for high-value search terms before they are inevitably penalized by search engines. This practice, often described as a more aggressive form of “parasite SEO,” involves leveraging the existing domain authority of trusted brands to push offshore gambling links and cryptocurrency schemes. For readers and search engine users, it means that a once-reliable source for gaming news or tech reviews can almost overnight become a front for affiliate marketing, often using AI-generated text and fictitious author profiles to maintain a veneer of legitimacy.

The Anatomy of a Digital Takeover

The business model employed by Clickout Media follows a specific, calculated lifecycle. It begins with the acquisition of websites that have spent years, or even decades, building trust with both readers and search engines like Google. These sites—covering everything from niche sports and gaming to technology and even local charity work—possess high “Domain Authority” (DA), a metric that indicates how well a site is likely to rank in search results. Once a site is acquired, the transformation is rapid. Former employees have reported that the original editorial staff is often sidelined or laid off, and the core mission of the publication is abandoned. In its place, a massive volume of content is produced, primarily focused on online casinos, sports betting, and unregulated cryptocurrency platforms. The goal is simple: capture as much traffic as possible for lucrative search queries like “best online slots” or “top crypto casinos” while the site still benefits from its previous reputation.

The Maintenance of a Credible Facade

To avoid immediate detection by search engine algorithms or the existing audience, the transition is sometimes handled in phases. For a brief period after an acquisition, a company might continue to publish a small amount of legitimate coverage. This creates the illusion that the site is still active in its original niche. However, beneath the surface, the infrastructure is being pivoted toward high-revenue affiliate deals. During this phase, the ratio of genuine reporting to promotional gambling content shifts dramatically. Eventually, the genuine reporting disappears entirely, replaced by thousands of articles that serve no purpose other than to house affiliate links. These links direct users to gambling sites where the publisher earns a commission—often a percentage of the money the referred player loses.

How AI Facilitates Search Spam at Scale

The speed at which these sites are repurposed is made possible by generative AI. Traditional editorial workflows require time, research, and human verification. In the “AI gambling hub” model, these requirements are seen as bottlenecks. Instead, AI tools are used to churn out reviews, “top 10” lists, and guidebooks at a scale that human writers could never match. This content is rarely original. It often scrapes information from other sources and reformulates it to include specific keywords that help it rank. Because the content is generated by machines, it can be published in massive batches, allowing a single domain to target thousands of different search queries simultaneously. While the quality is often poor, the existing authority of the domain can trick search algorithms into ranking the content highly, at least in the short term.

The Use of Fake Author Profiles

To satisfy Google’s focus on E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness), these operations frequently use fake author profiles. These profiles often feature AI-generated headshots and fabricated biographies claiming years of experience in the gambling or financial sectors. By creating these digital ghosts, the publishers attempt to bypass search engine filters that look for signs of low-quality or untrustworthy content. This deliberate deception is a hallmark of what search experts call “reputation abuse.”

Parasite SEO vs. Site Reputation Abuse

In the world of Search Engine Optimization (SEO), “parasite SEO” traditionally refers to the practice of publishing content on a third-party website (like a major news outlet or a platform like Medium) to take advantage of that site’s ranking power. However, the strategy attributed to Clickout Media goes a step further. Rather than just placing an article on someone else’s site, they buy the entire site and use its reputation as a “host” for spam. Google has clarified its stance on this, referring to extreme cases of this behavior as “site reputation abuse.” According to Google’s policies, publishing third-party content or low-quality content at scale for the primary purpose of manipulating search rankings is a direct violation of their guidelines. When a site that was once a legitimate news brand is suddenly filled with thousands of AI-generated casino reviews, it triggers a red flag for “reputation abuse.”

The Consequences for Search Visibility

The lifecycle of these hijacked sites usually ends in a “manual action” or a significant algorithmic penalty from Google. Once the search engine identifies that a site is no longer providing its original value and is instead being used to game the system, the site is often deindexed. This means it disappears from search results entirely. For Clickout Media and similar operators, this appears to be an accepted cost of doing business. The strategy is essentially a “pump and dump” scheme for digital assets. They extract as much affiliate revenue as possible during the months it takes for Google to catch up, and once the domain is penalized and loses its traffic, they move on to the next acquisition. The original brand, which may have taken a decade to build, is left a hollowed-out shell, permanently barred from search results.

The Impact on the Media Industry

The human cost of this business model is significant. When a reputable news site is converted


How to use first-party data to find high-impact content ideas

The Hidden Crisis of Modern Content Marketing

In the current digital landscape, most content marketers and SEO practitioners are fishing in the exact same pond. We all have subscriptions to the same high-end SEO toolsets, we analyze the same competitor keyword gaps, and we follow the same “best practices” dictated by search engine algorithms. While these tools are indispensable for understanding market trends, they have inadvertently created a massive “echo chamber” of commoditized content. If you and your five closest competitors are all looking at the same Semrush or Ahrefs data, you are likely producing nearly identical content. This leads to a sea of sameness where brand authority is diluted, and the user is left scrolling through ten versions of the same article. In an era where Generative AI can summarize generic information in seconds, being “just like everyone else” is a recipe for invisibility. There is, however, a significant competitive advantage sitting right under your nose: your first-party data. This is information that your competitors cannot buy, scrape, or replicate. It is the specific, nuanced, and often messy data generated by your actual customers and prospects. When you learn to mine this data for content ideas, you stop guessing what people want and start addressing exactly what they are asking for.

Understanding the Shift: Why Third-Party Tools Create an Echo Chamber

Third-party SEO tools are excellent at measuring existing search demand. They provide estimates on keyword volume, difficulty scores, and SERP (Search Engine Results Page) layouts. However, these tools are retrospective—they tell you what has already happened and what others are already doing. They don’t necessarily reflect the unique pain points of your specific customer base. When content is created solely based on third-party metrics, the result is often “SEO-first” content rather than “audience-first” content. This approach ignores the specific language, internal jargon, and burning questions that emerge during a real-world sales cycle. By relying exclusively on these tools, organizations risk getting lost in a high-competition environment where the only way to win is through sheer volume or massive backlink budgets. To break out of this cycle, you must pivot toward data that is proprietary to your organization. By leveraging first-party insights, you can create high-impact content that resonates on a deeper level, drives higher conversion rates, and establishes true topical authority that AI models and search engines alike will recognize as unique.

What Exactly Is First-Party Data in a Content Context?

For the modern marketer, first-party data refers to any information collected directly from your audience through your own channels. It is the “inside track” on customer behavior. While many think of first-party data only in terms of privacy regulations and tracking cookies, its true value lies in the qualitative insights it provides for content strategy. There are five primary “goldmines” where these high-impact content ideas are hidden:

1. Internal Site Search Queries

Your website’s search bar is essentially a direct line to your user’s brain. When someone uses your internal search, they are telling you exactly what they expected to find on your site but couldn’t locate easily. These queries represent immediate content gaps. If hundreds of people are searching for “how to integrate with Slack” on your site and you don’t have a dedicated page for it, you have a high-priority content opportunity that no keyword tool would have flagged as specific to your brand.

2. Sales Call Transcripts and Recordings

Sales teams are on the front lines every day. Tools like Gong, Chorus, or even simple Zoom transcriptions are filled with the exact language prospects use. They reveal the specific fears, uncertainties, and doubts (FUD) that prevent a deal from closing. If a certain question comes up in 40% of discovery calls, that question deserves a comprehensive, high-quality blog post or video.

3. CRM Data and Deal Notes

Your Customer Relationship Management (CRM) system, such as Salesforce or HubSpot, is a graveyard of “lost deals” and “closed-won” patterns. By analyzing why deals were lost—perhaps to a specific competitor or due to a lack of a certain feature—you can create “defensive” content that addresses those specific comparison points before the next prospect reaches out.

4. Customer Support Tickets

The support team deals with the “aftermath” of the customer journey. If your support queue is flooded with the same five questions, your documentation or top-of-funnel content is failing. Transforming support tickets into “How-To” guides or “Troubleshooting” articles not only improves SEO but also reduces the load on your support staff, creating a double win for the company.

5. Email Engagement and Replies

Email marketing is often treated as a one-way broadcast, but the most successful marketers treat it as a conversation. The replies you receive to your newsletters—and the specific links that get clicked versus those that are ignored—provide real-time feedback on what topics actually move the needle for your existing audience.

The Strategic Advantages of a First-Party Data Strategy

Using first-party data isn’t just a “nice-to-have” tactic; it is a fundamental shift that provides three distinct advantages over your competitors.

It Is Wholly Proprietary

The most significant advantage is that this data is yours and yours alone. Your competitors can use tools to see which keywords you rank for, but they cannot see what your customers are asking in private sales calls. They cannot see your internal search logs. This allows you to build a “content moat.” While they are busy fighting for high-volume, generic keywords, you can dominate the niche, high-intent queries that actually lead to revenue.
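As a sketch of the internal site-search mining described earlier, this Python snippet surfaces frequent queries that have no matching published page; each hit is a candidate content gap. The query log, slugs, and slug-matching rule are hypothetical simplifications of a real analytics export.

```python
from collections import Counter

# Hypothetical internal site-search log and published page slugs.
queries = [
    "how to integrate with slack", "pricing", "how to integrate with slack",
    "api rate limits", "how to integrate with slack", "pricing",
]
published_slugs = {"pricing", "api-rate-limits"}

def slugify(query: str) -> str:
    """Naive slug rule for matching a query against published pages."""
    return query.lower().strip().replace(" ", "-")

def content_gaps(search_log, slugs, min_count=2):
    """Frequent internal searches with no matching published page."""
    counts = Counter(search_log)
    return [
        (q, n) for q, n in counts.most_common()
        if n >= min_count and slugify(q) not in slugs
    ]

# The Slack integration query recurs but has no page, so it surfaces
# as a content gap; "pricing" is filtered out because a page exists.
print(content_gaps(queries, published_slugs))
```

In practice the matching step would be fuzzier (synonyms, partial matches against titles), but even this naive version separates "searched and found" from "searched and missing."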
It Solves the “Curse of Knowledge”

The “Curse of Knowledge” is a cognitive bias where experts find it difficult to imagine what it’s like not to know something. In marketing, this leads to using technical jargon that customers don’t actually use. For example, a company might sell “advanced aqueous filtration systems,” while their customers are simply searching for “how to fix my stinky tap water.” First-party data forces you to use the language of the buyer, ensuring your content is accessible and relevant.

It Maps Perfectly to
