Author name: aftabkhannewemail@gmail.com


Google pushes AI Max tool with in-app ads

The Evolution of Google Ads: From Tools to Internal Marketing

The digital advertising landscape is undergoing a fundamental transformation, driven by the rapid integration of artificial intelligence. For years, Google Ads has been the primary dashboard for marketers to manually control their search presence. However, a significant shift is occurring in how Google manages its relationship with advertisers. Recent observations within the Google Ads interface indicate that the company has begun a more aggressive push for its AI-driven features, specifically the “AI Max” tools for Search campaigns, by placing promotional advertisements directly within the campaign settings and workflow panels.

This move marks a departure from traditional software updates. Typically, platforms introduce new features through release notes, blog posts, or subtle “new” badges in the menu. By placing promotional messages directly in the areas where advertisers conduct routine audits and updates, Google is signaling that AI adoption is no longer just an option—it is a core business priority that the company is willing to market internally to its existing user base.

What is AI Max for Search?

To understand why Google is pushing this tool so hard, it is essential to define what AI Max for Search represents. While “Performance Max” (PMax) has been a household name in the PPC (Pay-Per-Click) community for some time, the push for AI Max within Search campaigns represents the next step in automated advertising. This suite of tools leverages Google’s most advanced machine learning models to automate bidding, keyword selection, and even creative asset generation.

AI Max tools are designed to look beyond exact match keywords and manual bidding strategies. Instead, they analyze trillions of data points in real time to predict which search queries are most likely to lead to a conversion. By using “AI Max” features, advertisers essentially hand over the steering wheel to Google’s algorithms, trusting that the system can optimize ROI more effectively than a human manager could through manual adjustments.

The Discovery: In-App Ads for AI Tools

The industry first caught wind of this new promotional strategy when Julie Bacchini, president and founder of Neptune Moon and a prominent voice in the PPC community, noticed a peculiar notification. While working inside a campaign’s settings panel, she was met with a promotional message explicitly encouraging the use of AI Max for Search. Bacchini shared her findings on LinkedIn, noting that it felt like Google was “essentially running an ad for AI Max in the settings area of a campaign.”

This is a strategic placement. The settings panel is where experienced marketers go to fine-tune their campaigns, adjust geographical targeting, and manage budgets. By placing a promotion here, Google is intercepting the workflow of the very people who might be the most skeptical of automated tools. It is a direct challenge to the manual control that many high-level advertisers still prefer to maintain.

Why Google is Adopting an Aggressive Promotion Strategy

There are several strategic reasons why Google is opting for in-platform advertisements rather than traditional marketing channels for its AI features. The tech giant is currently navigating a competitive landscape where AI is the primary battleground, and user adoption metrics are critical for long-term success.

1. Accelerating the Transition to Automated Bidding

Google has long been moving toward a “black box” approach to advertising. In this model, the advertiser provides the goals and the budget, and the AI handles the execution. However, many seasoned advertisers have been slow to adopt these features, fearing a loss of transparency and control. By inserting ads directly into the management interface, Google is attempting to normalize AI Max and lower the barrier to entry for those who have previously resisted the transition.

2. The Data Feedback Loop

Artificial intelligence is only as good as the data it processes. For Google’s AI models to improve, they need massive amounts of campaign data to train on. The faster Google can get advertisers to switch to AI Max, the faster its systems can learn from diverse industries, consumer behaviors, and conversion paths. This creates a feedback loop where more adoption leads to better AI performance, which in turn justifies further adoption.

3. Competitive Pressure from Meta and TikTok

Google is not the only player in the automated advertising space. Meta’s “Advantage+” campaigns and TikTok’s automated ad solutions have seen high adoption rates because they simplify the process for small and medium-sized businesses. Google must ensure its platform remains the most efficient choice for marketers who are increasingly looking for “set it and forget it” solutions that deliver results without requiring a full-time specialist to manage them.

The Implications for Search Marketers

The introduction of in-app ads for AI tools creates a new dynamic for digital marketing agencies and in-house teams. When a platform begins marketing its own features within the workspace, it changes the relationship between the tool and the user. There are both benefits and risks to this new approach.

Efficiency vs. Control

The primary benefit of AI Max tools is efficiency. For businesses with limited time, these automated features can handle complex tasks like responsive search ads, smart bidding, and broad match expansion. However, the trade-off is often a lack of granular data. Marketers who rely on specific keyword data to inform their broader business strategies may find that AI Max hides the very insights they need to grow their brand outside of the Google ecosystem.

The “Nudge” Effect

In behavioral economics, a “nudge” is a small intervention that influences behavior without forbidding any options. By placing these promotions in the campaign settings, Google is using a powerful nudge. A busy account manager might click “apply” or “learn more” simply because the prompt is conveniently located. This could lead to a silent shift in how accounts are managed, where AI-driven settings become the default not because they are always better, but because they are the most visible.

The Risk of Increased Costs

One of the criticisms often leveled at automated tools is that they can prioritize volume over efficiency if not properly constrained. AI Max tools are


Bing Webmaster Tools officially adds AI Performance report

The Evolution of Search Metrics: Bing’s Move Into AI Attribution

The landscape of search engine optimization is undergoing its most significant transformation since the advent of mobile-first indexing. As artificial intelligence becomes deeply integrated into how users find information, the traditional metrics of clicks, impressions, and rankings are no longer the only markers of success.

In a major move to provide transparency in this new era, Microsoft has officially launched the AI Performance report in Bing Webmaster Tools. Currently in public beta, this new dashboard is designed to help webmasters, SEO professionals, and content creators understand how their work is being utilized by generative AI. Specifically, it tracks how often a website’s content is cited as a source within Microsoft Copilot, Bing’s AI-powered summaries, and various third-party partner integrations that leverage Bing’s index to ground their AI models. This launch marks a pivotal moment for “Generative Engine Optimization” (GEO), providing the first real set of data points for those trying to optimize for the AI-first web.

What is the AI Performance Report?

The AI Performance report is a dedicated dashboard located within the Bing Webmaster Tools suite. Its primary function is to track “citations.” In the world of generative AI, a citation is a link or a reference that the AI provides to indicate where it retrieved the information used to generate its response. When a user asks Microsoft Copilot a question, the AI scans the web to find reliable data. If it uses your website to formulate that answer, the AI Performance report will log that event.

For years, SEOs have relied on the Performance report in Bing Webmaster Tools (or Google Search Console) to see which keywords drove traffic. The AI Performance report operates on a different logic. It doesn’t necessarily track “search queries” in the traditional sense; instead, it tracks “grounding queries”—the prompts or searches that led the AI to use your specific pages as the factual foundation for its output.

Microsoft first began testing this feature in late January, and its full release into public preview signifies the company’s commitment to an open ecosystem. By showing publishers how they contribute to the AI’s knowledge base, Microsoft is attempting to bridge the gap between AI consumption and content creation.

Key Metrics Explained: Decoding the Dashboard

The new dashboard introduces several specific metrics that differ from traditional search analytics. To gain value from the AI Performance report, it is essential to understand what each of these data points represents and how they interact with one another.

Total Citations

This is the headline figure of the report. It represents the total number of times any page from your website was cited as a source in an AI-generated answer during a specific period. It is the AI equivalent of an “impression,” but with a higher level of significance, as it implies your content was deemed authoritative enough to serve as a primary source for the AI’s response.

Average Cited Pages

This metric calculates the daily average of unique URLs from your site that are referenced across AI experiences. If you have a large content hub, this number helps you understand the “breadth” of your authority. A high number of total citations coming from only one or two pages suggests you have a few “blockbuster” articles, whereas a high average of cited pages indicates that the AI views your entire domain as a reliable resource across multiple topics.

Grounding Queries

Perhaps the most valuable part of the report for SEOs, Grounding Queries are the specific phrases or questions users typed into Copilot or Bing that triggered the AI to use your content. This functions similarly to keyword data but offers a glimpse into the conversational nature of AI interactions. By analyzing these queries, publishers can see the exact intent their content is satisfying in the eyes of the AI.

Page-Level Citation Activity

This section breaks down performance by individual URL. It allows you to see which specific pages are the workhorses of your AI visibility. If a page is getting high citations but low traditional search traffic, it may be because it is highly factual and well-structured—ideal for AI grounding—even if it isn’t ranking in the top three of a standard SERP.

Visibility Trends Over Time

Like any performance tracker, the AI Performance report includes a timeline view. This allows webmasters to see if their AI visibility is growing or shrinking. It is particularly useful for tracking the impact of content updates or seeing how changes in the AI models (like an update to Copilot) affect how often your site is referenced.

The Rise of Generative Engine Optimization (GEO)

With the release of this tool, Microsoft is essentially legitimizing the field of Generative Engine Optimization. For a long time, SEO was about matching keywords and building backlinks to climb a list of ten blue links. GEO is different; it is about ensuring your content is the most “extractable” and “verifiable” source for an LLM (Large Language Model). Microsoft has explicitly stated that this tool is an early step toward helping publishers navigate this shift.

To perform well in AI citations, the requirements are slightly different from traditional SEO. While standard SEO best practices still apply, GEO places a heavy emphasis on:

Information Density: Providing direct answers to complex questions.
Structural Clarity: Using H2 and H3 tags, bullet points, and tables that AI can easily parse.
Factual Accuracy: AI models are increasingly tuned to prefer “grounded” and “verified” facts over fluff.
Entity Representation: Ensuring that the people, places, and products mentioned on your site are clearly defined so the AI can connect them to its existing knowledge graph.

The Missing Piece: The Traffic and Click-Through Dilemma

While the AI Performance report is a welcome addition to the webmaster’s toolkit, it is not without its limitations. The primary criticism from the SEO community is the lack of click-through data. Currently, the report shows you that you were cited, but it does not tell you if the user actually clicked the citation to
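Returning to the metric definitions above, here is a minimal sketch of how the report’s two headline figures relate to each other, computed from a hypothetical export of citation events. The field names and log shape are assumptions for illustration; the actual report is a dashboard and does not expose raw events in this form.

```python
from collections import defaultdict

# Hypothetical citation-event log: one row per time an AI answer cited a page.
# "date" and "url" are illustrative field names, not an official export format.
events = [
    {"date": "2024-06-01", "url": "https://example.com/guide-a"},
    {"date": "2024-06-01", "url": "https://example.com/guide-a"},
    {"date": "2024-06-01", "url": "https://example.com/guide-b"},
    {"date": "2024-06-02", "url": "https://example.com/guide-a"},
]

# Total Citations: every citation event counts once, even repeat citations
# of the same URL.
total_citations = len(events)

# Average Cited Pages: daily average of *unique* URLs cited.
pages_per_day = defaultdict(set)
for e in events:
    pages_per_day[e["date"]].add(e["url"])
avg_cited_pages = sum(len(urls) for urls in pages_per_day.values()) / len(pages_per_day)

print(total_citations)   # 4
print(avg_cited_pages)   # (2 unique + 1 unique) / 2 days = 1.5
```

The gap between the two numbers is the signal: many total citations but a low average of cited pages points to a few “blockbuster” articles, while both numbers rising together suggests domain-wide authority.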


How to make automation work for lead gen PPC

The Challenge: Why PPC Automation Often Fails B2B Lead Gen

The digital advertising landscape has undergone a radical shift toward automation. Google, Microsoft, and Meta have spent years refining machine learning models designed to take the guesswork out of bidding, targeting, and creative placement. However, for B2B marketers, this shift has been met with significant frustration. The reality is that most advertising automation tools were built with the ecommerce model in mind, not the complex, high-friction world of business-to-business lead generation.

In ecommerce, the path to conversion is straightforward: a user clicks an ad, browses a product, and completes a purchase within minutes or hours. The conversion volume is high, the “cart value” is immediate, and the feedback loop for the algorithm is nearly instantaneous. B2B lead generation operates on an entirely different plane. Sales cycles can last 18 to 24 months, conversion volumes are often low, and the “value” of a lead is rarely clear at the moment of the initial form fill. Because of these discrepancies, many B2B advertisers find that turning on automation results in a flood of low-quality leads, wasted spend, and inconsistent performance.

But automation is no longer optional. To stay competitive, B2B marketers must find ways to make these systems work. The secret lies in moving away from “black box” automation and toward a strategy of “informed automation.” By providing the right signals and data structures, you can train the algorithms to understand the nuances of B2B buying cycles.

The Fundamental Obstacles in B2B Automation

To fix automation, we must first understand why it struggles. Melissa Mackey, Head of Paid Search at Compound Growth Marketing, identifies three core challenges that B2B advertisers face when interacting with machine learning.

1. The Customer Journey Duration

Google’s automation performs at its peak when the journey from click to conversion is short. However, B2B journeys are notorious for their length. When a prospect engages with an ad today, they might not become a paying customer for another year. Standard tracking systems often have a “lookback” window. For example, offline conversion tracking in Google Ads typically only looks back 90 days. If your sales cycle exceeds this, the algorithm loses the connection between the initial ad spend and the eventual revenue, making it impossible for the system to optimize for ROI.

2. The Conversion Volume Threshold

Machine learning thrives on data density. Google generally recommends about 30 conversions per campaign per month for its Smart Bidding algorithms to function effectively. While it can technically operate with less, the performance often becomes volatile. For niche B2B software or high-ticket consulting services, generating 30 high-quality “Bottom of Funnel” leads per month per campaign is often a monumental task. Without enough data points, the automation begins to “guess,” often leading to poor targeting decisions.

3. The Absence of Instant Value

In the ecommerce world, a $10 transaction is fundamentally different from a $1,000 transaction, and the system knows this instantly. In lead gen, every form fill looks the same to a basic tracking pixel. A student downloading a whitepaper for a thesis and a CTO looking for an enterprise solution both count as “one conversion.” Without assigned values, automation will naturally gravitate toward the easiest (and often lowest quality) conversion to hit its volume targets.

The Essential Foundation: Offline Conversion Tracking (OCT)

If you want automation to work for lead generation, connecting your CRM to your advertising platform is the single most important step you can take. This isn’t just a “nice-to-have” feature; it is the fundamental infrastructure required for B2B success in the modern era. If you are still only tracking website form fills, you are only seeing a fraction of the picture.

Offline Conversion Tracking (OCT) allows you to “close the loop” by feeding data from your CRM (like HubSpot or Salesforce) back into Google Ads or Microsoft Ads. This tells the system not just that a lead was generated, but that the lead turned into a Marketing Qualified Lead (MQL), then a Sales Qualified Lead (SQL), and finally a closed-won deal.

Integrating CRM Data

For those using industry-standard tools like HubSpot or Salesforce, the integration is often native. You can link the accounts and select which “Lifecycle Stages” should be counted as conversions. For businesses using custom CRMs or less common platforms, tools like Google Ads Data Manager or Snowflake can be used to create custom data tables. Even if a direct integration doesn’t exist, middleware like Zapier can act as a bridge. While there may be a subscription cost for these tools, the ability to optimize for “Sales Qualified Leads” rather than “Raw Form Fills” typically results in a much higher return on ad spend (ROAS).

Strategic Value Assignment: Training the Algorithm

Once your tracking is in place, you must move beyond binary conversion tracking. You need to tell the algorithm what different actions are worth. This is known as Value-Based Bidding (VBB). By assigning relative values to different actions, you create a hierarchy of importance that guides the machine learning process. Consider a simple value structure to signal intent levels to the system:

Video Views (Value: 1): This indicates basic brand awareness or curiosity. It is a low-intent signal.
Ungated Asset Downloads (Value: 10): The user is interested enough to spend time with your content, but hasn’t committed to a sales conversation.
Form Fills / Demo Requests (Value: 100): This is a high-intent “hand-raiser” who is willing to share personal information to hear from you.
Marketing Qualified Leads (Value: 1,000): This is your primary “North Star” metric.

By giving the MQL a value 1,000 times higher than a video view, you tell the system that one MQL is worth more than 999 video views. Without these weighted values, your campaigns might show a high conversion rate but produce zero revenue. The automation will simply find the path of least resistance—which usually means targeting people who like to watch videos but have no intention of buying software. When you add values, you can switch from a “Maximize Conversions”
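The weighted hierarchy described above can be expressed as a simple mapping that you would mirror in your conversion-value settings. This is a minimal sketch using the article’s illustrative figures; the action names are hypothetical labels, not platform defaults, and the values are relative weights rather than currency.

```python
# Illustrative conversion values from the hierarchy above (relative weights).
CONVERSION_VALUES = {
    "video_view": 1,         # low-intent awareness signal
    "ungated_download": 10,  # engaged, but not committed to a sales conversation
    "demo_request": 100,     # high-intent "hand-raiser"
    "mql": 1_000,            # the "North Star" metric
}

def total_value(actions):
    """Score a batch of recorded conversion actions by their assigned weights."""
    return sum(CONVERSION_VALUES[a] for a in actions)

# The point of the weighting: one MQL outweighs hundreds of low-intent actions,
# so value-based bidding stops chasing cheap video views.
assert total_value(["mql"]) > total_value(["video_view"] * 999)
print(total_value(["video_view", "demo_request", "mql"]))  # 1101
```

With a structure like this feeding the platform, a value-based bidding strategy has something meaningful to optimize toward instead of treating every conversion as equal.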


Why governance maturity is a competitive advantage for SEO

Imagine this scenario: you have spent the last three months meticulously building a high-performance product taxonomy. You have refined the schema markup, optimized the internal linking structure, and crafted metadata that promises to dominate the search results. On paper, everything is perfect. Then, without warning, the product team launches a site redesign over the weekend. They did not consult you, and they did not run a staging test for SEO impact. By Monday morning, half of your high-value URLs are returning 404 errors. The new templates have stripped out the structured data you spent weeks implementing. When your boss asks why organic traffic has plummeted by 40%, you are left scrambling for answers.

This is a nightmare that many SEO professionals know all too well. However, this is not an SEO failure in the technical sense; it is a fundamental failure of governance. Weak governance is the silent killer of search performance. It is the reason why talented SEO teams spend their nights and weekends fixing preventable problems instead of driving growth. In an era where AI is rapidly changing the search landscape, the stakes have never been higher. To move from a state of constant firefighting to one of strategic prevention, organizations must embrace the Visibility Governance Maturity Model (VGMM). High governance maturity is no longer just a corporate “nice-to-have”—it is a distinct competitive advantage.

Governance Is Your Insurance Policy, Not Bureaucracy

When most people hear the word “governance,” they envision endless meetings, red tape, and layers of approval that slow down innovation. In reality, SEO governance is your insurance policy. It is the framework that protects your hard work from being accidentally destroyed by teams who do not fully understand the nuances of search engines.

The Visibility Governance Maturity Model (VGMM) is not about creating obstacles. It is about establishing clear ownership, documenting standardized processes, and defining decision rights. When a company has high governance maturity, SEO is no longer an afterthought—it is a mandatory checkpoint in the development lifecycle. Implementing a governance model offers several immediate benefits for the SEO professional and the organization at large:

Protection of Investment: It prevents releases from undoing months of optimization work.
Standardization: It creates documented standards so that developers and content creators know the requirements upfront, reducing the need for repetitive explanations.
Clear Accountability: It defines who is responsible for what, ensuring that SEO professionals are not held liable for technical errors made by other departments without their knowledge.
Strategic Inclusion: It ensures SEO has a seat at the table during the planning phases of any major digital project.
Executive Visibility: It translates SEO efforts into a language that leadership understands: risk management and business continuity.

The AI Challenge: Why Governance Is Now Mandatory

A few years ago, SEO was relatively straightforward. You optimized for your website, and you optimized for Google. Today, the digital ecosystem is far more fragmented and volatile. The rise of Generative AI has introduced a new layer of complexity that traditional SEO tactics cannot solve alone. Modern SEOs are now tasked with optimizing for a variety of AI-driven surfaces, including:

AI Overviews (AIO): Google’s automated summaries that can rewrite or synthesize your content, often reducing click-through rates.
ChatGPT and Claude Citations: Large Language Models (LLMs) that may or may not link back to your original source.
Perplexity Summaries: Search engines that pull data from various competitors and present it in a unified answer.
Voice Assistants: Systems that typically cite only one authoritative source, making the “winner-take-all” dynamic more intense.
Knowledge Panels: Dynamic entities that sometimes pull conflicting data from across the web.

Beyond these external challenges, internal issues remain. Content teams might be flooding the site with AI-generated “fluff” that lacks E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness). Developers might overlook crawl budgets during a platform migration. Product managers might launch features that inadvertently break structured data. Without a governance framework, you are the only person who understands how these moving parts interact. If you are the sole guardian of search visibility, the system is fragile. Governance builds resilience into the organization so that visibility is maintained regardless of who is in the office.

The Five Levels of the Visibility Governance Maturity Model

The VGMM evaluates whether an organization is structured to sustain SEO performance over the long term without burning out its staff. Understanding where your organization falls on this spectrum is the first step toward improvement.

Level 1: Unmanaged (The Reactive State)

In this stage, SEO is treated like a fire drill. There is no clear ownership, and changes to the website happen haphazardly. The SEO team (if one even exists) usually discovers problems only after they have impacted traffic. Documentation is non-existent, and the culture is one of constant crisis management.

Level 2: Aware (The Initial Recognition)

Leadership begins to recognize that SEO is important, but there is still no formal authority. Some standards might exist in an old Google Doc somewhere, but they aren’t enforced. You may have allies in the dev or content teams, but their support is voluntary and can be withdrawn whenever they get busy. Improvements are often temporary and easily reversed by the next quarterly update.

Level 3: Defined (The Tactical Foundation)

Ownership is finally documented. There are clear SEO standards, and at least some teams are required to follow them. SEO is consulted before major site changes, and there is a basic QA checkpoint in place. At this level, the “firefighting” begins to subside, and the SEO team can start working more predictable hours.

Level 4: Integrated (The Strategic Workflow)

This is where SEO becomes a competitive advantage. Search requirements are built into the release workflows and the CMS itself. Automated tools catch technical errors before they are shipped to production. Cross-functional teams share accountability for traffic goals. At this stage, the SEO lead can actually take a vacation without worrying that the site will be unrecognizable upon their return.

Level 5: Sustained (The Gold Standard)

In


Why PPC measurement feels broken (and why it isn’t)

The Perception of Decay in Digital Advertising

If you have spent any significant amount of time managing Pay-Per-Click (PPC) accounts, you do not need a whitepaper or a research report to tell you that the ground has shifted beneath your feet. The indicators are everywhere, appearing in the subtle daily frictions of campaign management. You see it when Google Click Identifiers (GCLIDs) are mysteriously missing from your URLs. You see it when conversions that used to appear in real time now arrive with a three-day lag. Perhaps most frustratingly, you see it in the monthly reporting meetings where you find yourself spending more time explaining why the data looks “off” than actually discussing strategy.

When these discrepancies occur, the natural reflex for many digital marketers is to assume that something has broken. We hunt for a technical glitch, a misconfigured tracking script, or a platform update that went haywire. We treat the lack of data as a bug to be fixed. However, the reality is far more complex and, in some ways, more permanent. The truth is that PPC measurement is not broken; it has evolved into a new state.

Many of our current measurement setups are built on an aging foundation—an assumption that a unique identifier will reliably and consistently follow a user from their initial click all the way to a final conversion. In the modern, privacy-first web, that assumption is no longer valid. The conditions that allowed for perfect, deterministic tracking have been eroded by a combination of legislative changes, browser restrictions, and shifting consumer expectations.

A Legacy of Precision: The Deterministic Era

To understand why the current environment feels so disorienting, we have to look back at the era that defined our expectations. For the better part of two decades, Google Ads (formerly Google AdWords) made digital advertising feel uniquely measurable, controllable, and predictable. In the early days, before Google Ads even offered native conversion tracking, advanced advertisers were building their own bespoke systems. They used custom tracking pixels and complex URL parameters to stitch together the customer journey. This was the era of the Urchin Software Corporation—the company Google eventually acquired to create what we now know as Google Analytics. That acquisition signaled a shift toward standardized, comprehensive measurement where nearly every interaction could be tracked and attributed at the individual click level.

In this “Old World” of measurement, the process followed a very specific, linear path:

1. A user performed a search and clicked an ad.
2. A GCLID was appended to the destination URL.
3. The advertiser’s website captured that ID and stored it in a first-party cookie.
4. When a conversion occurred (such as a form fill or a purchase), that specific ID was sent back to the platform.

This created a deterministic match. You could point to a specific click at 2:14 PM on a Tuesday and link it directly to a conversion at 9:05 AM on Friday. This level of granularity allowed for high-confidence attribution and made it easy to explain ROI to stakeholders. But this model was only possible because browsers allowed parameters to pass through unimpeded, cookies persisted for long durations, and users generally accepted tracking as the default state of the internet.

The Great Erosion: Why the Old Model Fails Today

The reliability of deterministic tracking depended on a set of technical conditions that no longer exist. Today’s browser environment is actively hostile to the types of tracking we once took for granted. Apple’s Intelligent Tracking Prevention (ITP) was a watershed moment. By limiting the lifespan of cookies and stripping identifiers from URLs, Safari fundamentally changed the rules of the game.
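The deterministic chain described in the four-step flow above is mechanically simple, which is why it felt so reliable. Here is a minimal sketch of the parameter-capture step (a simplified illustration only, not a production tracking implementation; the URL and payload are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

def extract_gclid(landing_url: str):
    """Pull the GCLID off the destination URL so it can be stored in a
    first-party cookie and echoed back to the platform at conversion time."""
    params = parse_qs(urlparse(landing_url).query)
    values = params.get("gclid")
    return values[0] if values else None

# A click appends the identifier to the destination URL...
url = "https://example.com/pricing?utm_source=google&gclid=TeSter-123"
gclid = extract_gclid(url)
print(gclid)  # TeSter-123

# ...and at conversion time the stored ID is sent back, creating the
# deterministic click-to-conversion match described above.
conversion_payload = {"gclid": gclid, "conversion": "form_fill"}
```

The fragility is equally visible in this sketch: if a browser or consent tool strips the `gclid` parameter before the page loads, `extract_gclid` returns nothing and the chain is broken at step 3.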
Other browsers like Firefox followed suit with Enhanced Tracking Protection (ETP), and Google’s own Chrome has been navigating the slow, often-delayed transition toward a cookieless future via the Privacy Sandbox.

Beyond the browsers, we have the rise of privacy regulations like GDPR in Europe and CCPA in California. These laws forced the implementation of consent banners. If a user clicks “Reject All,” the measurement chain is broken before it even begins. Private browsing modes and ad-blocking software further contribute to the “signal loss.” In this environment, URL parameters may be stripped before the page even loads. Cookies set via JavaScript might expire in 24 hours rather than 30 days. This isn’t a technical error; it is the browser performing exactly as designed. Trying to “fix” this by finding workarounds to restore click-level tracking is often a losing battle. It is a fight against the tide of privacy-centric engineering.

The Psychological Challenge of Partial Observability

The shift in PPC measurement is not just a technical hurdle; it is a psychological one. This is most apparent in the industry’s reception of Google Analytics 4 (GA4). Much of the frustration surrounding GA4 stems from the fact that it was built for a world where some data will always be missing. In Universal Analytics, the data felt absolute. In GA4, the data is often modeled. This transition from “observable” data to “inferred” data is jarring for advertisers who were trained to rely on absolute numbers.

We are now operating in a world of partial observability. We have to accept that we are seeing a representative sample of reality, rather than a mirror image of it. This shift requires a change in how we spend our time. Too often, marketers spend hours tweaking ad platform settings—adjusting bids by pennies or rewriting headlines—when the more impactful work would be hardening the data infrastructure. If the input data is incomplete or low-quality, the most sophisticated automated bidding algorithm in the world cannot save the campaign.

The Role of Infrastructure: Client-Side vs. Server-Side

As we move away from traditional tracking, two distinct approaches have emerged to keep measurement viable: client-side and server-side. Client-side measurement, which relies on pixels like the Google Tag, is still necessary. These pixels fire immediately upon an action and provide the fast feedback loops that automated bidding systems crave. However, because they run in the user’s browser, they are the


How SEO leaders can explain agentic AI to ecommerce executives

The digital landscape is currently navigating a period of rapid evolution, and at the center of this transformation is a concept that often feels more like science fiction than a business strategy: agentic AI. For SEO leaders operating within the ecommerce sector, the challenge is no longer just about optimizing for a search engine result page. Instead, it is about preparing an entire organization for a future where software agents participate in the decision-making process alongside—or even on behalf of—human consumers.

Ecommerce executives are inundated with headlines promising total automation and the end of traditional search. They are hearing about autonomous agents that can research, compare, and purchase products without a single human click. In this environment, the role of the SEO leader is to act as a bridge. You must translate the technical complexities of agentic AI into a strategic framework that executives can understand, act upon, and fund. This requires moving beyond the hype and focusing on how these systems change the fundamental mechanics of growth, risk, and brand visibility.

Start by explaining what 'agentic' actually means

The first hurdle in any executive conversation is terminology. "AI" has become a catch-all term that often obscures more than it reveals. To have a productive discussion, SEO leaders must define what makes an AI system "agentic." The most important distinction to make is that agentic systems do not replace the customer; they act as a proxy for the customer.

In a traditional ecommerce journey, the human does all the heavy lifting: they search, they click through multiple tabs, they read reviews, they compare prices, and they navigate the checkout process. In an agentic journey, the human provides the intent, the preferences, and the constraints, while the software agent performs the labor.
When speaking to leadership, use a framing that emphasizes continuity rather than total disruption: "We aren't losing our customers to machines. We are seeing a new type of decision-maker enter the journey—a software proxy that acts on the customer's behalf to handle discovery, comparison, and execution." By defining agents as tools for efficiency rather than replacements for human desire, you can move the conversation from a place of fear to a place of practical preparation. The goal is to ensure the brand is ready to "talk" to these agents as effectively as it currently talks to human shoppers.

Keep expectations realistic and avoid the hype

One of the most valuable services an SEO leader can provide is a sense of perspective. The "AI hype cycle" often leads executives to believe that radical change will happen overnight. This leads to two dangerous extremes: panic and dismissal. Panic results in teams rewriting long-term strategies too quickly, shifting budgets into unproven technologies, and abandoning core SEO foundations that still drive the majority of revenue. Dismissal, on the other hand, occurs when executives see that the initial hype hasn't immediately cratered their numbers, leading them to believe the threat is non-existent—until it's too late to react.

SEO leaders should advocate for a steadier, more nuanced view. Agentic AI is not a separate entity from search; it is an acceleration of trends that have been building for years. Personalized discovery, zero-click searches, and the need for high-quality structured data are not new concepts. Agents simply amplify these existing pressures.

Explain to your executive team that the impact of agentic AI will be uneven. Standardized categories with clear data—such as electronics, office supplies, or basic apparel—will likely see agentic adoption much sooner.
Complex, high-emotion, or highly regulated categories, like luxury goods or health-related products, will move more slowly because the "trust gap" for automation is much wider. This tiered approach allows the business to prioritize its response based on its specific product mix. For more on how the landscape is shifting, you can explore the discussion on whether we are ready for the agentic web.

Change the conversation from rankings to eligibility

For decades, the primary KPI for SEO has been "rankings." If you were on the first page of Google, you were winning. In an agentic world, however, the concept of a "page of results" begins to dissolve. An agent doesn't browse a list of ten blue links; it scans available data and selects the best option for its user. This means SEO leaders must shift the internal conversation from "ranking" to "eligibility." The question is no longer "Where do we show up in the results?" but "Are we even eligible to be chosen by the agent?"

Eligibility is built on three pillars: clarity, consistency, and trust. An agent needs to be able to ingest your data and understand exactly what you sell, what it costs, whether it is in stock, and who it is for. If your product information is fragmented, if your pricing is inconsistent across different platforms, or if your technical infrastructure is slow and unreliable, an agent will simply filter you out of the consideration set to avoid a "bad" experience for its human user.

Framing SEO as an "eligibility engine" connects the technical work of the SEO team directly to commercial reality. It makes the case for investing in better product feeds, cleaner schema markup, and more robust APIs. If the business isn't "readable" by a machine, it becomes invisible to the agentic web.

Explain why SEO no longer sits only in marketing

Traditionally, many C-suite executives have viewed SEO as a subset of the marketing department—a channel for driving traffic. Agentic AI shatters this silo.
Because agentic selection depends on factors like stock accuracy, delivery speeds, and payment security, SEO becomes an operational and technical priority as much as a marketing one. SEO leaders need to be clear with leadership: “We cannot optimize for agents solely through content and keywords.” An agentic system might reject a brand because its shipping API is too slow, or because its return policy is buried in a non-indexable PDF. These are not traditional “marketing” problems; they are logistics, IT, and legal problems. Positioning SEO as a “connecting function” allows you to


What repeated ChatGPT runs reveal about brand visibility

The Shift from Deterministic Search to Probabilistic AI

For decades, search engine optimization (SEO) was built on a foundation of relative stability. While Google's algorithms were—and still are—notoriously complex, a search query performed by two different users in the same location would generally yield very similar results. This deterministic nature allowed marketers to track rankings with a high degree of precision. If you were in the third position for "best accounting software" on Monday, you were likely there on Tuesday.

The rise of Large Language Models (LLMs) like ChatGPT has completely disrupted this paradigm. We are moving away from the era of the static index and into the era of the probabilistic response. When you ask an AI a question, it doesn't "look up" an answer; it generates one, token by token, based on mathematical probabilities. This means that if you ask ChatGPT the same question ten times, you are likely to get ten different responses.

This inherent inconsistency raises a critical question for digital publishers and B2B marketers: If the AI is constantly changing its mind, how can we accurately measure brand visibility? New research into repeated ChatGPT runs provides a startling look at just how volatile these recommendations are and what it takes for a brand to achieve true dominance in the age of AI search.

Understanding the Research: Methodology and Scope

To understand the mechanics of AI brand visibility, it is essential to look at data derived from high-volume testing. Recent studies, including foundational work by Rand Fishkin at SparkToro, have highlighted that AIs are highly inconsistent when recommending products. Building upon that premise, a deeper dive into B2B-specific use cases was conducted to see if factors like category competitiveness or prompt complexity could stabilize these erratic responses.
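The difference between looking up an answer and generating one can be illustrated with a toy sampler. The "model" below is nothing more than a weighted distribution over brand names (the brands and weights are invented for illustration, and a real LLM is vastly more complex), but it shows why an identical prompt can surface a different mix of names on every run:

```python
import random

# Toy stand-in for probabilistic generation: a weighted distribution over
# possible "next brands". Brands and weights are invented, not real data.
NEXT_TOKEN_WEIGHTS = {"QuickBooks": 0.4, "Xero": 0.3, "FreshBooks": 0.2, "Wave": 0.1}

def generate_recommendation(rng: random.Random) -> str:
    """Sample one brand, the way an LLM samples a likely next token."""
    tokens, weights = zip(*NEXT_TOKEN_WEIGHTS.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded only so the example is reproducible
answers = [generate_recommendation(rng) for _ in range(10)]
print(answers)  # the same "prompt" yields a shifting mix of brands
```

Even in this four-brand toy, ten "runs" rarely agree; scale the vocabulary up to a real model's and run-to-run variation becomes the norm, not the exception.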
The methodology for this specific research involved a rigorous testing environment:

- The Prompt Set: 12 distinct prompts were developed, split between highly competitive B2B categories (like general accounting software) and niche categories (such as User Entity Behavior Analytics, or UEBA).
- Complexity Levels: The prompts were further divided into "simple" queries (e.g., "What is the best accounting software?") and "nuanced" queries that included specific personas and pain points (e.g., "For a Head of Finance focused on ensuring financial reporting accuracy and compliance, what is the best accounting software?").
- The Execution: Each of the 12 prompts was run 100 times through the logged-out, free version of ChatGPT. To ensure the results weren't skewed by session history or IP tracking, a different IP address was used for each of the 1,200 interactions, simulating 1,200 unique users.

The goal was to move past anecdotal evidence and determine the statistical likelihood of a brand appearing in a generative response. The findings reveal a landscape where visibility is much harder to maintain than many marketers realize.

How Many Brands Does ChatGPT Actually Know?

One of the first revelations from the data is the sheer volume of brands ChatGPT draws from when generating recommendations. Across 100 runs of a single prompt, ChatGPT mentioned an average of 44 different brands. However, this number fluctuates wildly depending on the industry. In some highly fragmented categories, the AI mentioned as many as 95 different brands over the course of 100 sessions.

The Impact of Category Competitiveness

The data shows a clear correlation between the maturity of a software category and the "bench depth" of ChatGPT's recommendations. For competitive categories, the AI mentioned nearly twice as many brands per 100 responses compared to niche categories.
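The core of this kind of analysis is a simple tally: for each run, record which brands were mentioned, then compute each brand's appearance rate across runs. Here is a sketch using four invented transcripts in place of the study's 100 real ChatGPT responses:

```python
from collections import Counter

# Invented stand-ins for repeated ChatGPT transcripts; in the research,
# each prompt was actually run 100 times from fresh IP addresses.
runs = [
    ["QuickBooks", "Xero", "FreshBooks"],
    ["QuickBooks", "NetSuite", "Sage"],
    ["Xero", "QuickBooks", "Wave"],
    ["FreshBooks", "Zoho Books", "QuickBooks"],
]

# Count the number of runs each brand appeared in (set() dedupes per run).
mentions = Counter(brand for run in runs for brand in set(run))
n_runs = len(runs)

print(f"distinct brands: {len(mentions)}")
for brand, count in mentions.most_common():
    print(f"{brand}: appeared in {count}/{n_runs} runs ({count / n_runs:.0%})")
```

The two summary numbers from the study map directly onto this tally: "distinct brands" is the bench depth (the 44-brand average), and each brand's per-run rate is its visibility.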
These numbers suggest that in crowded markets, ChatGPT's probabilistic engine has a much wider net of "likely" candidates to choose from, making it significantly harder for any single brand to stand out consistently.

The Nuance Paradox

Interestingly, adding complexity to a prompt—such as specifying a persona or a use case—did not drastically narrow the field of brands mentioned. One might assume that a more specific request would lead to a more curated list of experts. Instead, ChatGPT mentioned only slightly fewer brands in response to nuanced prompts. For some categories, the number of brands actually increased when the prompt became more complex.

This suggests that ChatGPT may not yet have a deep enough understanding of specific brand features to differentiate them based on sophisticated use cases. It knows a brand exists within a category, but it lacks the granular data to know if "Brand A" is truly better for a "Head of Finance" than "Brand B." As a result, it falls back on its broader training data, leading to a similar rotation of names regardless of the persona provided.

The Return of the '10 Blue Links'

For years, the SEO industry joked about the "10 blue links" of the Google search results page. In a fascinating twist of digital evolution, ChatGPT seems to have adopted a similar constraint. On average, ChatGPT mentions approximately 10 brands in any single response. While the range can vary—from a minimum of 6 to a maximum of 15—the average remains remarkably consistent with traditional search formats.

However, the difference lies in the rotation. While Google's 10 links remain relatively static for a given query, ChatGPT's 10 links are in a state of constant flux. In competitive categories, the AI draws from its deep bench, swapping brands in and out with every new conversation. This creates a "lottery effect" for brand visibility.
Even if your brand is in the top 44 names the AI knows, your chance of appearing in any specific user's session is only a fraction of the total.

Why Rotation Matters for GEO

This rotation is the primary challenge for Generative Engine Optimization (GEO). In traditional SEO, if you rank #3, you receive #3-level traffic consistently. In the world of AI search, if you are a "visible but not dominant" brand, you might appear in 20% of responses. This means 80% of potential customers never see your name, even though the AI "knows" who you are. This inconsistency makes it incredibly difficult to forecast lead generation or brand lift from AI platforms.

The Winner's Circle: Defining Dominant Brands


Google Search Hits $63B, Details AI Mode Ad Tests via @sejournal, @MattGSouthern

Google's Financial Resilience in the Age of Artificial Intelligence

Google has once again demonstrated its dominance in the global advertising market, reporting that its Search revenue has climbed to a staggering $63 billion. This represents 17% year-over-year growth, a figure that defies earlier analyst concerns that the rise of generative AI might cannibalize the company's core business. Instead of retreating, Google is leaning into its technological shift, integrating artificial intelligence directly into the search experience and, more importantly, finding ways to monetize it.

The latest financial disclosures reveal a company in transition—one that is successfully moving from a traditional index of links to a sophisticated, AI-driven answer engine. As Alphabet (Google's parent company) navigates this evolution, the data suggests that users are not just accepting these changes; they are engaging with them at a much deeper level than previously seen in the history of the platform.

The $63 Billion Milestone: Breaking Down the Numbers

Achieving $63 billion in a single quarter for search revenue alone is a testament to the enduring power of Google's ecosystem. The 17% growth rate is particularly notable because it occurs during a period of intense competition from new AI startups and a shifting regulatory landscape. This revenue surge is driven by several factors, including improved ad targeting, higher retail spending, and the initial rollout of AI-enhanced features that keep users within the Google environment for longer periods.

For advertisers, these numbers signal stability. Despite the noise surrounding "AI search alternatives," the vast majority of consumer intent still begins on Google. The company's ability to grow its revenue by double digits suggests that its auction systems and ad delivery algorithms are becoming more efficient, extracting more value from every search query entered into the bar.
Understanding AI Mode: A New Way to Search

Central to Google's future strategy is what is being termed "AI Mode." This encompasses the suite of generative AI features, including AI Overviews (formerly known as Search Generative Experience, or SGE), that provide synthesized answers to complex questions. Rather than presenting a list of websites for the user to visit, AI Mode gathers information from across the web and presents a cohesive summary directly on the Search Engine Results Page (SERP).

The introduction of AI Mode represents arguably the most significant UI/UX change in Google Search's history. It shifts the user's role from a "searcher" to a "conversationalist." Users can ask follow-up questions, request specific formats for data, and explore topics with a level of nuance that traditional keyword searching never allowed. This shift is clearly resonating with a segment of the population that desires immediate, high-quality answers without the friction of clicking through multiple tabs.

The 3x Engagement Factor: Why Users Are Lingering

One of the most revealing statistics shared by Google is that queries handled in AI Mode run three times longer than traditional searches. In the world of digital publishing and advertising, "dwell time" is a critical metric. When a user spends three times as long on a search result, it indicates a significantly higher level of engagement and cognitive investment.

Why are these sessions so much longer? There are several theories supported by early user data:

- Complexity of Queries: Users are likely using AI Mode for multifaceted questions that don't have a single "right" answer, leading to more reading and interaction.
- Iterative Discovery: The conversational nature of AI allows users to refine their search in real-time. Instead of bouncing back to the search bar to type a new query, they are interacting with the AI's response to dig deeper.
- Content Consumption: Because the AI provides a comprehensive overview, users are consuming more information directly on the Google page rather than navigating away immediately.

For Google, this increased time on page is a goldmine. Every additional second a user spends interacting with an AI interface is an opportunity to serve a highly relevant advertisement or a product suggestion.

Testing Ads in AI Mode: The Future of Monetization

The most anticipated aspect of Google's recent update is the confirmation that they are actively testing ad placements within AI Mode. For months, the SEO and PPC communities have wondered how Google would protect its massive revenue stream if users stopped clicking on traditional blue links. The answer is simple: bring the ads to the AI.

Google is currently experimenting with several ad formats within the AI-generated summaries. These are not merely traditional side-bar ads; they are integrated into the "flow" of the AI's response. For example, if a user asks for the best way to remove a stain from a couch, the AI might provide a step-by-step guide, while simultaneously displaying "sponsored" links for the specific cleaning products mentioned in the text.

Key features of these AI Mode ad tests include:

- Contextual Relevance: Ads are being triggered based on the specific nuances of the AI's generated response, rather than just the initial keyword. This allows for a level of precision in targeting that was previously impossible. The ad becomes part of the "solution" provided by the AI.
- Native Integration: Early tests show that ads are being placed above, below, and sometimes within the AI Overview box. By labeling these clearly as "Sponsored," Google maintains its transparency standards while ensuring the ads are in the user's direct line of sight.
- Shopping Integration: For commercial queries, Google is leaning heavily into its Shopping Graph.
If a user utilizes AI Mode to compare two different laptops, Google can inject real-time pricing, availability, and "Buy" buttons directly into the comparison table generated by the AI.

The Strategic Shift for Advertisers and Brands

The transition to an AI-first search engine means that advertisers must rethink their strategies. The $63 billion revenue figure suggests that the current system is working, but the shift to 3x longer query times in AI Mode means that the "top of the funnel" is changing. Brands can no longer rely solely on being the first organic link; they need to ensure their products and services are part of the data set that the


5 Google Analytics Reports PPC Marketers Should Actually Use via @sejournal, @brookeosmundson

Introduction to Mastering Google Analytics for PPC

In the modern digital marketing landscape, data is the bridge between a high-spending campaign and a high-performing one. For Pay-Per-Click (PPC) marketers, the Google Ads dashboard is often the primary workspace. However, relying solely on platform-specific data provides a fragmented view of the customer journey. To truly understand how paid traffic interacts with a brand, marketers must look beyond the click and dive into the post-click behavior captured by Google Analytics 4 (GA4).

Google Analytics offers a holistic perspective that platform-specific tools cannot replicate. It allows you to see how paid users navigate your site, where they drop off, and how they interact with other marketing channels. By leveraging specific reports within GA4, PPC specialists can justify their ad spend, optimize their targeting strategies, and ultimately increase the return on investment (ROI) for their clients or organizations.

The transition from Universal Analytics to GA4 has changed how we view metrics, shifting the focus toward events and engagement. For PPC professionals, this means learning to navigate a new set of reports designed to highlight user intent and attribution. Here are the five essential Google Analytics reports that every PPC marketer should be using to drive better results.

1. The Model Comparison and Conversion Path Reports

One of the greatest challenges in PPC management is attribution. When a user clicks on a search ad, leaves the site, returns via an organic search three days later, and finally converts through a direct visit, who gets the credit? In the Google Ads interface, you might see "last-click" or "data-driven" attribution based only on Google Ads interactions. However, the Conversion Path report in GA4 reveals the entire multi-channel journey.
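The way credit shifts between attribution models can be demonstrated with a few toy conversion paths. The channel names and journeys below are invented, and real attribution (especially GA4's data-driven model) is far more sophisticated than this single-touch comparison; the sketch only shows why the same conversions tell different stories under different rules:

```python
from collections import Counter

# Toy conversion paths: ordered channel touchpoints per converting user.
# Channel names are illustrative, not pulled from any real account.
paths = [
    ["paid_search", "organic", "direct"],
    ["youtube_ads", "paid_search"],
    ["paid_search"],
    ["display", "organic", "direct"],
]

def attribute(paths: list[list[str]], model: str) -> Counter:
    """Give 100% of each conversion's credit to one touchpoint per model."""
    credit = Counter()
    for path in paths:
        credit[path[0] if model == "first_click" else path[-1]] += 1
    return credit

print("first-click:", dict(attribute(paths, "first_click")))
print("last-click: ", dict(attribute(paths, "last_click")))
```

Under last-click, the top-of-funnel YouTube and Display touches earn zero credit even though they opened two of the four journeys, which is exactly the undervaluation the Conversion Path report is meant to expose.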
Understanding the Multi-Touch Journey

The Conversion Path report, located under the Advertising section, provides a visual representation of the touchpoints a user takes before completing a "Key Event" (formerly known as a conversion). For PPC marketers, this is vital for proving the value of top-of-funnel campaigns. You might find that your YouTube ads or Display campaigns rarely get the final click but appear in 40% of all conversion paths as an early touchpoint. Without this report, those campaigns might look like failures, leading to premature budget cuts.

Using Model Comparison to Justify Spend

The Model Comparison tool allows you to compare how different attribution models—such as Last Click vs. Data-Driven—distribute credit for conversions. By comparing these models, you can identify if your PPC efforts are being undervalued by traditional reporting. If a specific campaign shows a significantly higher conversion volume under a "First Click" model compared to a "Last Click" model, it proves that the campaign is a powerful discovery tool that initiates the customer relationship.

2. The Landing Page Report

A PPC ad is only as good as the page it sends the user to. Even the most perfectly crafted ad copy cannot overcome a poor landing page experience. While Google Ads provides a "Landing Page Experience" score within its Quality Score metric, the Landing Page report in GA4 provides the actual behavioral data needed to diagnose conversion roadblocks.

Analyzing Engagement Rate vs. Bounce Rate

In GA4, "Bounce Rate" has been redefined, and the focus has shifted to "Engagement Rate." For a PPC marketer, a low engagement rate on a specific landing page suggests a mismatch between the ad's promise and the page's content. By filtering this report by "Session Manual Source/Medium" to isolate your paid search traffic, you can see exactly how users coming from your ads are behaving. Are they scrolling? Are they clicking on key elements?
Or are they leaving within seconds?

Optimizing for Quality Score

Landing page performance directly impacts your Quality Score in Google Ads, which in turn determines your Cost Per Click (CPC) and ad rank. By using the Landing Page report to identify pages with low "Average Engagement Time," you can prioritize which pages need technical fixes, better mobile optimization, or more compelling calls to action (CTAs). Improving these metrics in GA4 often leads to lower acquisition costs in your PPC campaigns.

3. User Demographics and Geographic Detail Reports

Targeting the right audience is the cornerstone of PPC success. While Google Ads allows for demographic and geographic targeting, the data in GA4 is often more granular and reveals how these segments behave once they arrive on your site. This report is essential for fine-tuning your "negative" targeting—knowing who not to show your ads to.

Identifying High-Value Segments

By navigating to the User Attributes section, you can see reports based on City, Country, Age, Gender, and Interests. For a PPC marketer, the goal is to find the "pockets of profit." For instance, you might find that while your ads are being served nationwide, users in three specific cities have a conversion rate that is double the national average. Conversely, you might find that a certain age group has a high click-through rate but zero conversions.

Refining Geographic Bid Adjustments

Armed with GA4 geographic data, you can return to Google Ads and implement bid adjustments. You can increase bids for high-converting regions to ensure maximum visibility, and decrease bids for (or exclude entirely) regions that drain your budget without providing a return. This level of synchronization between GA4 behavior and Google Ads targeting is what separates elite marketers from the rest.

4. Google Search Console Integration Report

PPC does not exist in a vacuum; it operates alongside Organic Search (SEO).
One of the most powerful reports for a PPC marketer is actually found by linking Google Search Console (GSC) with GA4. This integration allows you to see the "Google Search Queries" report, which provides insight into the organic queries driving traffic to your site.

Identifying Keyword Gaps

By comparing your paid search terms with your organic search terms, you can find "gaps." If your site is ranking organically on page three for a high-converting keyword, you need to increase your PPC presence for that term to capture the traffic you are missing. On the other hand, if you are ranking #1 organically


Reddit says 80 million people now use its search weekly

The New Frontier of Information Retrieval: Community as the Search Engine

The landscape of digital discovery is undergoing a seismic shift, moving rapidly away from centralized, singular search results toward authentic, community-driven information. Nowhere is this transformation more evident than on Reddit, often dubbed "the front page of the internet." The platform recently announced a staggering milestone: 80 million people are now utilizing Reddit search every single week. This monumental figure, disclosed during the company's Q4 2025 earnings call, represents far more than just increased internal usage; it signifies Reddit's emergence as a formidable, high-intent search engine in its own right.

This dramatic uptake in weekly search activity, which jumped significantly from 60 million just a year prior, directly follows strategic internal changes, most notably the integration of its core keyword search functionality with its powerful, AI-driven tool, Reddit Answers. For digital marketers, publishers, and competitive intelligence analysts, this trajectory signals a crucial change in visibility strategy. Reddit is no longer merely a source of backlinks or anecdotal discussions; it is now a destination where users initiate, execute, and complete critical research tasks, often bypassing traditional search engines like Google entirely. The implication is clear: visibility and authority on Reddit are rapidly becoming just as essential as ranking well in traditional organic search results.

The Strategic Integration: Unifying Search and AI Answers

The exponential growth in search usage is directly attributable to Reddit's focused effort to streamline and enhance its discovery tools. The key innovation has been the unification of disparate search functions into a single, cohesive experience.
Merging Keyword Search with Generative AI

During the Q4 2025 call, CEO Steve Huffman highlighted the "significant progress" made by combining standard keyword search with Reddit Answers, the platform's bespoke AI-driven Q&A feature. Users now navigate a unified interface, allowing them to fluidly transition between classic search results—listing relevant subreddits and posts—and sophisticated, AI-generated summaries derived from those community discussions. Furthermore, these AI Answers are now often featured directly within the standard search results page, providing instant gratification for complex queries.

This strategic move addresses a core behavioral change observed across the internet: people are increasingly seeking nuanced perspectives and real-world experiences when making decisions, particularly concerning products, services, and entertainment. Instead of simply wanting a definition or a single factual answer, users want to understand the consensus, the trade-offs, and the authentic discussions surrounding a topic.

Becoming an End-to-End Search Destination

The ambition articulated by Huffman is for Reddit to evolve from a platform where people go to find things into an "end-to-end search destination." This means capturing the full user intent journey. Rather than functioning primarily as a middleman that sends users elsewhere via external links, Reddit is betting that by providing superior, community-vetted answers—supported by AI summation—it can retain user traffic and monetize that high-intent activity directly on the platform.

This shift means the platform is proactively positioning itself to intercept high-value queries. When users are researching "what is the best monitor for competitive gaming?" or "how to start investing in crypto," Reddit wants to be the primary, definitive source of information, leveraging the collective wisdom of millions of niche communities.
Reddit Answers: The Driving Force of Engagement

The surge to 80 million weekly search users has been heavily propelled by the adoption and success of Reddit Answers. This generative AI component transforms raw community data into digestible, actionable insights.

A Tectonic Shift in Query Volume

The statistics surrounding Reddit Answers are compelling proof of concept. Queries directed specifically toward the Answers feature skyrocketed from approximately 1 million a year ago to a substantial 15 million in Q4 2025. This parallel increase in both general search usage and specific AI query volume demonstrates that users are actively seeking out the blended search experience offered by the platform.

Reddit Answers excels in areas where subjective opinion, comparison, and diverse perspectives are valued over a single factual truth. Huffman noted that the feature performs strongest for open-ended questions—those requiring guidance on what to buy, watch, or try next. These are precisely the types of questions that drive pre-purchase research and high-value consumer decisions, making them extremely valuable from a monetization standpoint. A community-vetted answer about the pros and cons of three different laptops, drawn from thousands of user reviews, carries immense authority compared to an answer derived purely from corporate marketing materials.

Expanding Beyond Text: Dynamic and Agentic Results

The innovation behind Reddit Answers is not stagnating. The company is actively piloting advancements to make the search experience more immersive and interactive. Huffman mentioned testing "dynamic agentic search results" that incorporate various media formats beyond simple text summaries. This movement suggests that future Reddit search results will likely include interactive elements, short video clips, embedded images, and other rich media directly within the answer summaries.
This approach not only caters to contemporary consumption habits but also paves the way for increasingly sophisticated advertising opportunities that blend seamlessly into the search experience, particularly in verticals like gaming, electronics, and finance.

Understanding High-Intent User Behavior

COO Jennifer Wong elaborated on the distinct nature of search behavior observed on Reddit. She characterized the activity as "incremental and additive" to existing user engagement, but crucially, often tethered to high-intent moments.

The Value of Comparison and Research

Unlike passive scrolling through feeds, search activity on Reddit is inherently active and goal-oriented. Users are coming to the platform specifically to research purchases, compare alternatives, troubleshoot technical issues, or validate opinions before committing to a decision. This high-intent search behavior is exceptionally attractive to advertisers. A user typing a comparative query—"Nvidia RTX 4070 vs AMD RX 7800 XT"—is a qualified lead deep within the purchasing funnel. By capturing and satisfying this intent on-platform, Reddit provides a highly contextualized environment for advertising that is often difficult to replicate on purely algorithmic feed platforms.

UI/UX Changes to Prioritize Discovery

To capitalize on this growing user behavior, the company is refining its application layout and user interface (UI/UX). Huffman confirmed
