How To Track AI Visibility & Prompts The Right Way via @sejournal, @lorenbaker

The digital marketing landscape is undergoing a tectonic shift. For decades, Search Engine Optimization (SEO) was a relatively straightforward game of keywords, backlinks, and technical health. However, with the rise of Large Language Models (LLMs) and AI-integrated search engines like Google’s Search Generative Experience (SGE), Bing Chat, Perplexity, and OpenAI’s SearchGPT, the rules have changed. It is no longer enough to track which position your website holds on a traditional Search Engine Results Page (SERP). Today, the most critical metric for forward-thinking brands is AI visibility.

Understanding how AI models perceive your brand and how often they cite your content in response to user prompts is the next frontier of digital strategy. Tracking AI visibility and prompts allows marketers to move beyond simple rankings and into the realm of influence. To succeed in this new era, you must learn how to monitor, analyze, and optimize your presence within these black-box systems.

The Evolution from Keywords to Prompts

In traditional search, users enter short, fragmented queries like “best laptop 2024.” In the AI era, user behavior is shifting toward natural language prompts. A user might now ask, “I am a graphic designer looking for a lightweight laptop under $1,500 with a long battery life; what are my best options?”

This shift from keywords to complex prompts changes everything for search professionals. Prompts are more conversational, specific, and intent-driven. Because they are more detailed, the responses generated by AI are highly personalized. If you aren’t tracking how AI models handle these specific prompts, you are missing out on a massive segment of the “search” journey. Tracking prompts means understanding the context in which your brand is being mentioned—or why it is being ignored.

What is AI Visibility?

AI visibility refers to the frequency and prominence with which your brand, product, or content appears in AI-generated responses. Unlike the traditional “10 blue links,” AI visibility is often bundled into a narrative. An AI might summarize three different articles to answer a user’s question. If your content provides the core facts for that summary, you have high visibility, even if the user never clicks through to your site.

Tracking this visibility is essential for several reasons. First, it helps you understand your “Share of Model.” Much like Share of Voice, this tells you how much of the AI’s “mindshare” you own compared to competitors. Second, it identifies gaps in your content strategy. If an AI provides an answer that is factually incorrect about your brand or omits you entirely, it indicates a lack of authoritative data available for the model to ingest.

Establishing a Framework for Tracking AI Prompts

To track AI prompts effectively, you cannot rely on the tools you use for traditional search, such as Google Search Console. You need a specialized framework that accounts for the non-linear nature of AI interactions. Here is how to build that framework from the ground up.

1. Identify Your Core Prompt Categories

Start by categorizing the types of prompts your target audience is likely to use. These generally fall into three buckets:

- Informational Prompts: Users asking for explanations, “how-to” guides, or definitions. (e.g., “How does cloud computing work?”)
- Comparative Prompts: Users weighing two or more options. (e.g., “Compare the iPhone 15 Pro vs. Samsung S24 Ultra.”)
- Transactional/Actionable Prompts: Users looking for a specific recommendation or a path to purchase. (e.g., “Find me a hotel in New York with a gym and free breakfast.”)

By categorizing prompts, you can track which areas your brand excels in and where you are losing ground to competitors.

2. Monitoring Citation and Attribution

One of the most valuable forms of AI visibility is the citation. When an AI model like Perplexity or SGE provides a source link, it is a direct endorsement of your authority. Tracking how often you are cited—and for which topics—is the new version of backlink monitoring. You should look for:

- Direct links to your articles.
- Brand mentions within the text (even without a link).
- The sentiment of the mention (positive, neutral, or negative).

3. Analyzing Answer Accuracy

AI models are prone to hallucinations. Tracking prompts allows you to see if the AI is presenting your brand accurately. If you find that an LLM is consistently misrepresenting your pricing, features, or company history, you need to investigate your structured data and the clarity of your on-site content to ensure the model is “learning” the correct information.

Tools and Methodologies for Measuring AI Presence

Since this is a relatively new field, the tooling is still evolving. However, there are several ways to gather data on your AI visibility today.

Manual “Secret Shopper” Testing

The most basic way to track visibility is to manually interact with various AI models. Create a spreadsheet of your most important “money prompts” and run them through ChatGPT, Claude, Gemini, and Bing. Document whether your brand is mentioned, where the AI is getting its information, and the tone of the response. While time-consuming, this provides qualitative insights that automated tools might miss.
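Part of that spreadsheet work can be semi-automated. The minimal sketch below runs a prompt list through one model using the official openai Python package (it assumes an OPENAI_API_KEY in the environment); the brand terms, prompts, model name, and output file are placeholders to swap for your own, and sentiment still needs human review.

```python
# Minimal "secret shopper" harness: run a fixed prompt list through one
# model and log whether the brand is mentioned. Brand terms and prompts
# below are hypothetical examples.
import csv
from datetime import date

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND_TERMS = ["Acme CRM", "acmecrm.com"]  # hypothetical brand to track
PROMPTS = [
    "What is the best CRM for small businesses?",
    "Compare the top CRM tools for a 10-person sales team.",
]

with open("ai_visibility_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for prompt in PROMPTS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        mentioned = any(t.lower() in answer.lower() for t in BRAND_TERMS)
        # Log date, prompt, and a simple mentioned/not-mentioned flag;
        # tone and sourcing are left for a manual second pass.
        writer.writerow([date.today().isoformat(), prompt, mentioned])
```

Running the same script on a schedule turns one-off spot checks into a longitudinal record of your Share of Model.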
Automated AI Tracking Platforms

Newer SEO platforms are beginning to offer AI tracking modules. These tools simulate thousands of prompts and aggregate the data to show you your “AI Rank.” They can identify which pages are being used as sources most frequently and highlight when a competitor suddenly gains visibility in a specific niche.

Analyzing Referral Traffic

While some AI platforms do not pass through clear referral data, many do. Keep a close eye on your analytics for traffic coming from “openai.com,” “perplexity.ai,” or “google.com” (specifically looking for SGE-driven clicks). A spike in traffic from these sources indicates that your content is successfully triggering AI citations.

The Importance of Contextual Prompt Engineering

To track the “right way,” you must think like a prompt engineer. When testing your visibility, don’t just use one variation of a question. The way a prompt is phrased can significantly alter the AI’s output. This is known as “prompt sensitivity.” For example, if you are a SaaS company, track prompts like: “What is the best CRM for small businesses?” “Which


AI Search Barely Cites Syndicated News Or Press Releases via @sejournal, @MattGSouthern

The digital marketing landscape is currently undergoing its most significant transformation since the invention of the search engine itself. As Artificial Intelligence (AI) begins to dominate how users find information, the traditional metrics of success—keyword rankings and backlink volume—are being replaced by a new, more elusive metric: the AI citation.

For years, public relations professionals and SEO specialists have relied on syndication as a cornerstone of their strategy. The idea was simple: distribute a press release to hundreds of news outlets, gain a massive footprint of backlinks, and watch the authority of a brand grow. However, recent data suggests that in the age of AI search, this strategy is not just outdated; it is largely invisible.

A comprehensive analysis of over four million AI search citations reveals a stark reality for digital marketers. Syndicated press releases, once the gold standard for broad distribution, barely register in the answers provided by AI search engines like Perplexity, Google’s AI Overviews, and ChatGPT. Instead, these platforms are showing a heavy preference for original editorial content and well-maintained, brand-owned newsrooms. This shift signals a fundamental change in how information must be packaged and published to survive the transition from traditional search to generative AI discovery.

The Data Behind the Disconnect

The study, which examined four million citations across various AI-driven search platforms, provides a granular look at what LLMs (Large Language Models) deem “worthy” of being cited. The findings indicate that while a press release might be picked up by 500 local news sites, the AI model typically identifies the content as duplicate information. Because AI models are designed to provide the most concise and authoritative answer possible, they have no reason to cite 500 identical versions of a story. They seek the primary source or the most comprehensive editorial analysis of that source.

In the hierarchy of AI citations, syndicated content sits at the very bottom. The data shows that the “long tail” of syndication—those dozens or hundreds of small, automated news sites that republish wire service content—contributes almost zero visibility in AI-generated answers. This is a massive wake-up call for companies that have historically measured the success of a PR campaign by the number of “placements” achieved through wire services.

Why AI Search Prefers Editorial Over Syndication

To understand why AI search engines are snubbing syndicated news, we have to look at how these models are trained and how they retrieve information. AI search isn’t just looking for keywords; it is looking for “information gain.” Information gain is a concept where a piece of content provides new, unique, or more detailed information that wasn’t available in other sources.

The Problem of Duplicate Content

Syndicated press releases are, by definition, duplicate content. When a wire service blasts a release to 300 different domains, the text remains identical across all of them. For a traditional search engine like Google, canonical tags and sophisticated algorithms have long been used to filter out this noise. For an AI search engine, the goal is even more focused: find the single most authoritative version of a fact. If an AI model sees the same text on 300 sites, it will likely ignore 299 of them. If the original source is a generic PR wire, the AI may skip it entirely in favor of an editorial piece that adds context, expert quotes, and analysis.
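A toy sketch makes the collapse concrete: if you fingerprint each page by its normalized text, identical syndicated copies reduce to a single citable source. The URLs and text below are invented, and a real pipeline would use fuzzy near-duplicate detection (shingling or SimHash) rather than exact hashes.

```python
# Group pages by a hash of their normalized text; syndicated copies
# collapse to one representative per unique story.
import hashlib

pages = {
    "https://wire-partner-1.example/acme-launch": "Acme today announced...",
    "https://wire-partner-2.example/acme-launch": "Acme today announced...",
    "https://local-news-3.example/acme-launch": "Acme today announced...",
    "https://www.example.com/newsroom/acme-launch": "Acme today announced...",
    "https://techblog.example/acme-analysis": "Acme's launch matters because...",
}

def fingerprint(text: str) -> str:
    # Normalize whitespace and case so trivially reformatted copies match.
    return hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()

unique: dict[str, str] = {}
for url, text in pages.items():
    unique.setdefault(fingerprint(text), url)  # keep one URL per story

# Five pages collapse to two citable sources: the release (once) and
# the editorial piece that adds new information.
print(len(unique), "unique sources from", len(pages), "pages")
```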
The Value of Context and Analysis

Editorial content—written by journalists, industry experts, or specialized bloggers—fares much better in AI citations because it provides context. A press release might announce a new product, but an editorial piece explains how that product fits into the current market, compares it to competitors, and discusses its potential impact. AI models thrive on this connective tissue. They are designed to answer “why” and “how,” not just “what.” Because editorial content is unique and provides a narrative, it offers the “information gain” that LLMs prioritize when building a response for a user.

The Rise of the Owned Newsroom

One of the most interesting takeaways from the 4-million-citation study is the resilience of “owned newsrooms.” While syndicated versions of news fail, the original source published on a company’s own domain often manages to secure a citation. This highlights the growing importance of brand authority and the “source of truth.”

When a company publishes an official statement, a white paper, or a detailed case study on its own “News” or “Insights” section, AI search engines recognize that domain as the primary source. This is particularly true if the brand has established E-E-A-T (Experience, Expertise, Authoritativeness, and Trust). In the eyes of an AI, citing the company that actually created the news is more logical than citing a third-party aggregator that simply republished it.

Building a Newsroom for the AI Era

For brands to capture AI search traffic, they must pivot from being “distributors” to being “publishers.” An AI-friendly newsroom is not just a list of PDFs or dry corporate announcements. It should include:

- Unique Data: AI models love statistics and original research. Publishing proprietary data is one of the fastest ways to earn a citation.
- Expert Perspectives: Content that includes quotes and insights from identifiable experts helps satisfy the “Expertise” component of E-E-A-T, which AI models use to weight sources.
- Structured Data: Using Schema markup helps AI crawlers understand the context of the news, the entities involved, and the date of publication.
- Comprehensive Coverage: Rather than a short 400-word blast, high-performing newsrooms publish deep dives that cover a topic from multiple angles.

The Impact on Digital PR and SEO Strategy

The revelation that syndicated news is ignored by AI search necessitates a total overhaul of digital PR strategies. For years, the industry has been incentivized to focus on volume. Agencies would report to clients that a story was “covered” by hundreds of outlets, even if those outlets were just automated subdomains of local news stations. In an AI-first world, this metric is a vanity metric with zero ROI.

From Links to Citations

In traditional SEO, a link from a syndicated site might


Walmart: ChatGPT checkout converted 3x worse than website

The Reality Check for Agentic Commerce

For the past year, the tech world has been buzzing with the promise of “agentic commerce”—a future where artificial intelligence doesn’t just suggest products but actually handles the entire transaction for you. The vision was simple: you tell ChatGPT you need ingredients for a dinner party or a new set of power tools, and the AI handles the search, the selection, and the checkout without you ever leaving the chat interface.

However, recent data from Walmart, the world’s largest retailer, suggests that we are much further from that reality than many anticipated. In a revealing disclosure, Walmart confirmed that purchases made directly inside ChatGPT converted at roughly one-third the rate of purchases made when users were directed to Walmart’s own website. This massive gap in performance highlights a critical friction point in the evolution of AI-driven shopping. While AI is excellent at discovery and curation, it is currently struggling to close the deal. For marketers, SEO professionals, and e-commerce platform owners, Walmart’s experience serves as a vital case study in why the “owned environment” still reigns supreme in the digital economy.

Inside the Experiment: Walmart and OpenAI’s Instant Checkout

The experiment began in earnest in November, when Walmart partnered with OpenAI to pilot a feature known as “Instant Checkout.” The initiative offered roughly 200,000 products that could be purchased natively within the ChatGPT interface. The goal was to remove the friction of jumping between apps and websites, creating a seamless “conversational” shopping experience.

On paper, it seemed like a win-win. OpenAI could demonstrate the utility of its ecosystem for commerce, and Walmart could reach tech-forward consumers exactly where they were spending their time. However, the results were far from the revolutionary breakthrough both companies hoped for. Daniel Danker, Walmart’s Executive Vice President of Product and Design, did not mince words when describing the outcome. He noted that the in-chat purchases converted at only one-third the rate of traditional click-out transactions. More tellingly, Danker described the native AI checkout experience as “unsatisfying.”

Why Instant Checkout Failed to Convert

To understand why a 3x difference in conversion exists, we have to look at the psychology of the modern shopper and the technical limitations of current LLM (Large Language Model) interfaces.

1. The Lack of Visual Richness

E-commerce is a visual medium. When a user visits Walmart.com, they are greeted with high-resolution images, video demonstrations, 360-degree product views, and detailed size charts. ChatGPT, by its nature, is a text-heavy interface. While it can display images, the rich, interactive experience of a dedicated retail site is difficult to replicate in a scrolling chat window. Shoppers often need that final visual confirmation before hitting “buy,” a step that feels less certain inside a third-party AI tool.

2. The Trust and Security Gap

Entering credit card information and personal shipping details into a chatbot feels fundamentally different from doing so on a brand’s official website. Despite the security protocols in place, there is a lingering “trust gap” when it comes to agentic commerce. Consumers are comfortable asking ChatGPT for a recipe or a summary of a news article, but trusting it to handle a financial transaction with a third-party retailer introduces a new layer of hesitation.

3. Missing Social Proof and Nuance

Walmart’s website is optimized for conversion through social proof—reviews, ratings, and “customers also bought” suggestions. While an AI can summarize reviews, the raw data of seeing thousands of verified purchases and reading specific user feedback provides a level of reassurance that a summarized AI response lacks. If the AI says, “This is a highly-rated drill,” it carries less weight than seeing 5,000 four-star reviews on the product page itself.

The Death of OpenAI’s Instant Checkout

The disappointing results from the Walmart partnership have had immediate consequences for OpenAI’s product roadmap. Earlier this month, OpenAI confirmed that it is phasing out the “Instant Checkout” feature entirely. This pivot marks a significant shift in how AI labs view commerce. Rather than trying to be the “everything store” that manages transactions internally, OpenAI is moving toward a model where the AI acts as a sophisticated lead generator, handing the final transaction back to the merchant. This is a victory for the traditional web and for brand-owned platforms. It suggests that for the foreseeable future, the “buy” button belongs on the retailer’s site, not in the LLM’s sidebar.

Enter Sparky: Walmart’s New Strategy for AI Integration

Walmart isn’t abandoning AI; it is simply changing how it integrates with it. The company is moving away from native ChatGPT checkouts and toward an “embedded” model. This involves the deployment of “Sparky,” Walmart’s own proprietary AI shopping assistant. Instead of a generic OpenAI checkout process, Sparky will be embedded within the ChatGPT ecosystem. This new approach changes the dynamic in several key ways:

Syncing the Shopping Experience

One of the biggest frustrations with the previous model was the lack of continuity. In the new version, users will log into their Walmart accounts through the interface. This allows for cart syncing across platforms. If you add an item to your cart via a conversation in ChatGPT, it will appear in your Walmart app and on the Walmart website. This creates a “persistent cart” that bridges the gap between AI discovery and traditional checkout.
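Nothing about Walmart’s actual architecture is disclosed here, but the persistent-cart idea can be sketched in a few lines: the cart is keyed to the customer’s account rather than to any one surface, so every channel mutates the same state. The class names, channels, and merge rule below are invented purely for illustration.

```python
# Hypothetical sketch of an account-level "persistent cart" shared by
# every channel (app, web, AI chat). Not Walmart's real system.
from dataclasses import dataclass, field

@dataclass
class CartItem:
    sku: str
    quantity: int

@dataclass
class PersistentCart:
    """One cart per account, visible to every surface the user touches."""
    account_id: str
    items: dict[str, CartItem] = field(default_factory=dict)

    def add(self, sku: str, quantity: int = 1) -> None:
        # Adding from any channel mutates the same account-level cart,
        # so an item added in a chat session shows up in the app too.
        if sku in self.items:
            self.items[sku].quantity += quantity
        else:
            self.items[sku] = CartItem(sku, quantity)

# An AI-chat session and the mobile app both write to the same cart:
cart = PersistentCart(account_id="acct-123")
cart.add("drill-18v")  # added via a ChatGPT conversation
cart.add("drill-18v")  # added again later from the app
assert cart.items["drill-18v"].quantity == 2
```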
Merchant-Handled Transactions

By moving the checkout back into Walmart’s system—even if it is triggered from within ChatGPT—Walmart regains control over the user experience. They can ensure that shipping options, loyalty points (like Walmart+), and promotional offers are applied correctly. This “app-based checkout” model is what OpenAI is now favoring for all its merchant partners.

Multi-Platform Presence

Walmart isn’t putting all its eggs in the OpenAI basket. The company confirmed that a similar integration with Google Gemini is slated for next month. By treating AI platforms as distribution channels rather than transaction hubs, Walmart is positioning itself to be present wherever the consumer starts their search journey.

What This Means for SEO and Digital Marketing

The Walmart/OpenAI data is a wake-up call for the


Perplexity’s Comet for iOS uses Google Search by default

The Evolution of Perplexity: From Answer Engine to Full-Scale Browser

In the rapidly shifting landscape of artificial intelligence, Perplexity has carved out a unique niche as the “answer engine” of choice for power users. However, the company is no longer content with being a simple destination for queries. With the launch of Comet for iOS, Perplexity is moving directly into the territory occupied by Safari and Google Chrome. Comet is not just an application with a search bar; it is a fully realized mobile browser designed to integrate large language models (LLMs) into the fabric of the daily browsing experience.

The most striking aspect of this release is the strategic partnership with—or rather, the technical reliance on—its primary competitor. Perplexity has confirmed that Comet for iOS uses Google Search as its default engine. For many in the tech industry, this seems like a tactical retreat, but a closer look at the mechanics of mobile search reveals a calculated move toward pragmatism over idealism. By leveraging Google’s established infrastructure for traditional queries while overlaying its own sophisticated AI assistant, Perplexity is attempting to create a “hybrid” browsing model that offers the best of both worlds.

Why Comet Defaults to Google Search

The decision to set Google as the default search provider within Comet was not made lightly. Aravind Srinivas, the CEO of Perplexity, has been transparent about the reasoning behind this choice. He notes that mobile queries are fundamentally different from desktop queries. When users are on their phones, they are often looking for immediate, actionable, and location-dependent information. These are categories where traditional search engines still hold a massive advantage over generative AI.

Specifically, Google excels in three key areas that current LLMs struggle to replicate with high precision: navigation, local search, and transactional intent. If a user searches for “best coffee shop near me” or “track my UPS package,” Google’s massive database and real-time indexing provide an instant, accurate result. Perplexity’s AI, while excellent at synthesizing complex information, can sometimes struggle with the latency and hyper-local accuracy required for these “right now” moments.

By using Google as the backbone for these types of queries, Comet ensures that users do not experience a drop in quality when switching from Safari or Chrome. It allows the browser to remain fast and reliable for everyday tasks while saving the “heavy lifting” of AI processing for queries that actually require intelligence and synthesis.

The Hybrid Search Experience: How Comet Works

Comet is designed to bridge the gap between the “old” web and the “new” AI-driven web. The interface provides traditional search engine results pages (SERPs) for fast, high-intent queries. If you search for a stock price or a weather forecast, Comet serves those results via Google’s engine. However, the Perplexity Assistant is always present, ready to layer advanced intelligence over the standard web experience.

This hybrid approach addresses one of the biggest friction points in AI search: speed. Generative AI models take time to process and output text. For a user who just wants to find a website’s login page, waiting five seconds for an AI to write a paragraph is an annoyance. Comet solves this by defaulting to the “fast” path for simple lookups and offering the “deep” path for research and complex questions.
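The fast-path/deep-path split is essentially a query router. The sketch below illustrates the idea under stated assumptions: the keyword signals and word-count cutoff are invented for demonstration, and any production router would use a trained intent classifier rather than these heuristics.

```python
# Illustrative query router: send fast, high-intent lookups to a
# traditional engine; reserve AI synthesis for complex questions.
from enum import Enum, auto

class Route(Enum):
    TRADITIONAL_SEARCH = auto()  # the "fast" path (standard SERP)
    AI_SYNTHESIS = auto()        # the "deep" path (generated answer)

FAST_PATH_SIGNALS = (
    "near me", "track", "login", "weather", "stock price", "hours",
)

def route_query(query: str) -> Route:
    q = query.lower()
    # Navigational, local, and transactional intent -> traditional SERP.
    if any(signal in q for signal in FAST_PATH_SIGNALS):
        return Route.TRADITIONAL_SEARCH
    # Short lookups rarely need synthesis; long, conversational
    # questions are where generative answers earn their latency.
    if len(q.split()) <= 4:
        return Route.TRADITIONAL_SEARCH
    return Route.AI_SYNTHESIS

assert route_query("best coffee shop near me") is Route.TRADITIONAL_SEARCH
assert route_query(
    "compare the trade-offs between index funds and ETFs for a new investor"
) is Route.AI_SYNTHESIS
```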
The Role of the Perplexity Assistant

Within the Comet environment, the Perplexity Assistant acts as a digital companion that lives inside the browser. It isn’t just a chatbot tucked away in a menu; it is integrated into the browsing flow. Users can summon the assistant to interact with the page they are currently viewing. For example, if you are reading a long-form investigative article, you can ask the assistant to summarize the key points or explain a specific concept mentioned in the third paragraph.

The assistant can also take actions on your behalf. Perplexity has touted the browser’s ability to help with form fills, draft emails based on page content, and even assist with bookings. This moves the browser from a passive viewing tool to an active productivity agent, aligning with the broader industry trend of “AI agents” that can execute tasks rather than just provide information.

Key Features of Comet for iOS

Comet arrives with a suite of features that differentiate it from standard mobile browsers. These features are built on the premise that a browser should be more than a window to the web; it should be an intelligence tool.

Voice-Enabled Browsing

On mobile, typing can be a hurdle. Comet emphasizes voice interaction, allowing users to ask complex questions while they browse. This isn’t just basic voice-to-text; the system is designed to understand context. You can ask follow-up questions about a site you are currently visiting without having to re-specify the subject, making the experience feel more like a conversation and less like a series of disjointed searches.

Deep Research and Cited Summaries

One of Perplexity’s flagship features is “Deep Research,” which has been ported over to the Comet browser. When a user initiates a research task, the AI doesn’t just look at one source. It crawls multiple tabs, analyzes various perspectives, and generates a comprehensive summary with citations. This is particularly useful for students, professionals, and researchers who need to get up to speed on a topic quickly without manually clicking through twenty different Google results.

Cross-Tab Synthesis

One of the most innovative features of Comet is its ability to research across tabs. Traditional browsers treat tabs as silos—information in tab A has no relationship to information in tab B. Comet’s assistant can look across your open tabs to find connections, summarize common themes, or help you compare products across different retail sites. This is a significant leap forward in mobile productivity.

SEO Implications: A New Era for Digital Marketers

The launch of Comet and its reliance on Google Search creates a complex new environment for SEO professionals and digital marketers. For years, the industry has speculated that AI search would kill traditional SEO. However, Perplexity’s decision to use Google as a default suggests that


Microsoft Advertising simplifies automated bidding setup

The Evolution of Bidding in Microsoft Advertising

The digital advertising landscape is undergoing a significant transformation, driven largely by the rapid advancement of machine learning and artificial intelligence. Microsoft Advertising, a key player in the search and native advertising space, is staying at the forefront of this evolution by refining its platform to be more intuitive and efficient. Recently, Microsoft announced a strategic shift in how advertisers configure automated bidding, moving away from fragmented settings toward a more consolidated, goal-oriented framework.

This update is not merely a cosmetic change to the user interface. It represents a fundamental philosophy in modern digital marketing: reducing manual complexity so that advertisers can focus on high-level strategy while the platform’s algorithms handle the granular execution. By folding familiar targets like Target CPA (Cost Per Acquisition) and Target ROAS (Return on Ad Spend) into broader automated strategies, Microsoft is streamlining the campaign creation process without sacrificing the power of its optimization engines.

Simplifying the Automated Bidding Experience

Historically, advertisers on Microsoft Advertising—and indeed many other platforms—faced an array of bidding options that could often feel redundant or confusing. You might have had to choose between “Maximize Conversions” and “Target CPA” as if they were entirely different animals. In reality, these strategies share a common goal: driving as many conversions as possible within specific parameters. Under the new simplified setup, Microsoft is consolidating these options into two core pillars based on the advertiser’s primary objective:

1. Maximize Conversions

For advertisers whose primary goal is volume—generating the highest number of leads, sign-ups, or sales within a given budget—the “Maximize Conversions” strategy is the foundation. However, Microsoft recognizes that volume often needs a safety net. Therefore, Target CPA (tCPA) is now an optional layer within the Maximize Conversions framework. Instead of selecting tCPA as a standalone strategy, you simply choose Maximize Conversions and, if desired, input your target cost per acquisition.

2. Maximize Conversion Value

For e-commerce businesses or service providers where not all conversions are equal, “Maximize Conversion Value” is the go-to approach. This strategy focuses on the total revenue or “value” generated by the campaign rather than just the raw count of conversions. Just as with the conversion-focused model, Target ROAS (tROAS) has been integrated as an optional setting. Advertisers can now select Maximize Conversion Value and define a specific return on ad spend goal within that selection.
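Conceptually, the consolidation turns the target from a standalone strategy into an optional parameter on the strategy. The sketch below models that relationship; it is an illustration of the idea, not Microsoft Advertising’s actual API, and all names and values are invented.

```python
# Conceptual model of the consolidated setup: two strategies, each with
# an optional target layered on top.
from dataclasses import dataclass
from enum import Enum, auto

class BidStrategy(Enum):
    MAXIMIZE_CONVERSIONS = auto()
    MAXIMIZE_CONVERSION_VALUE = auto()

@dataclass
class BiddingConfig:
    strategy: BidStrategy
    target_cpa: float | None = None   # optional layer on Maximize Conversions
    target_roas: float | None = None  # optional layer on Maximize Conversion Value

# An old standalone "Target CPA" campaign, expressed the new way:
# a volume goal plus a cost boundary.
lead_gen = BiddingConfig(BidStrategy.MAXIMIZE_CONVERSIONS, target_cpa=45.0)

# A value-focused campaign with a 400% return-on-ad-spend goal.
ecommerce = BiddingConfig(BidStrategy.MAXIMIZE_CONVERSION_VALUE, target_roas=4.0)

# Pure volume or value campaigns simply omit the optional target.
awareness = BiddingConfig(BidStrategy.MAXIMIZE_CONVERSIONS)
```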
The Technical Logic: What Has (and Hasn’t) Changed?

A common concern among seasoned PPC (Pay-Per-Click) managers when platforms “simplify” things is whether they are losing control or if the underlying algorithm is being altered. Microsoft has been clear on this front: the underlying bidding behavior remains exactly the same. The mathematical models, the data signals used (such as device, location, time of day, and intent), and the way the system bids in real-time auctions have not changed. The update is strictly focused on the configuration experience.

By grouping these settings, Microsoft is ensuring that advertisers are thinking about their goals in a more structured way. If your goal is conversions, you start with the conversion strategy. If you have a specific price point you need to hit to remain profitable, you add the tCPA target. This hierarchy makes logical sense and aligns with how modern AI-driven bidding works best—by giving the machine a clear objective and a boundary to work within.

Why Microsoft Is Making This Move Now

The move toward simplification is part of a broader industry trend toward “standardization.” Google Ads made similar changes to its bidding structure several years ago, and by aligning its interface with industry standards, Microsoft makes it significantly easier for multi-platform advertisers to manage their campaigns. Here are several reasons why this shift is beneficial for the ecosystem:

Reducing the Barrier to Entry

For small business owners or new digital marketers, the sheer number of bidding options in a modern ad platform can be overwhelming. “Should I use Target CPA or Maximize Conversions?” is a common question that often leads to analysis paralysis. By presenting two clear paths—Conversions or Value—Microsoft lowers the barrier to entry, allowing users to get campaigns up and running faster and with more confidence.

Consistency Across Accounts

For agencies managing dozens or hundreds of accounts, consistency is key to efficiency. This update ensures that the setup process is uniform across all campaigns. It reduces the likelihood of human error where one campaign might be set to a legacy standalone tCPA setting while another is using a newer automated strategy, leading to fragmented reporting and optimization workflows.

Focus on Machine Learning Efficiency

Automated bidding thrives on data. By consolidating these strategies, Microsoft can potentially gather and process performance data more effectively across its network. When the system knows that a “Maximize Conversions” campaign with a target is fundamentally trying to achieve the same thing as one without a target (just with more constraints), it can apply its learnings more broadly, leading to faster “learning phases” for new campaigns.

Practical Implications for Advertisers

If you are currently managing Microsoft Advertising campaigns, you might be wondering how this affects your daily routine. The good news is that the transition is designed to be seamless.

No Disruption to Existing Campaigns

Microsoft has confirmed that any existing campaigns currently using Target CPA or Target ROAS as standalone settings will continue to run without interruption. You do not need to go in and manually update your current campaigns. They will maintain their performance goals and bidding logic. However, when you go to create a new campaign, you will see the new streamlined interface.

Portfolio Bid Strategies Remain Intact

For advanced advertisers who use Portfolio Bid Strategies to manage multiple campaigns under a single bidding goal, there is no change. These remain a powerful way to aggregate data across different campaign structures to fuel the bidding algorithm, and Microsoft is keeping this functionality as it is.

Optionality Provides Continued Control

It is important to emphasize that while the setup is simpler, the control is still there. Setting an optional Target CPA or Target


Google expands its Universal Commerce Protocol to power AI-driven shopping

The Evolution of E-Commerce: From Search Queries to Autonomous Agents

The landscape of digital commerce is undergoing a fundamental transformation. For decades, the process of online shopping has remained largely unchanged: a user types a query into a search engine, clicks through various links, compares prices manually, adds items to a cart, and navigates a checkout flow. However, Google is currently building the infrastructure to move beyond this manual process. By expanding its Universal Commerce Protocol (UCP), Google is laying the groundwork for what industry experts call “agentic commerce.”

Agentic commerce refers to a future where AI agents—powered by large language models like Google Gemini—don’t just find products but actually perform the labor of shopping. These agents can evaluate reviews, compare technical specifications, apply discounts, and execute purchases on behalf of the user. To make this a reality, a bridge is needed between the AI’s reasoning capabilities and the retailer’s technical backend. That bridge is the Universal Commerce Protocol.

Google’s latest updates to UCP represent a significant leap forward in making AI-driven shopping functional, scalable, and personalized. By introducing new cart capabilities, real-time catalog access, and identity linking, Google is ensuring that the transition from human-led browsing to agent-led buying is seamless for both the consumer and the merchant.

What is the Universal Commerce Protocol (UCP)?

The Universal Commerce Protocol is an open standard designed to streamline how retailers share data with AI platforms. In the past, every merchant had their own unique way of handling carts, inventory, and user accounts. For an AI agent to interact with thousands of different websites, it would traditionally need to “scrape” those sites, a process that is often slow, error-prone, and fragile.

UCP solves this by providing a modular, standardized language. When a retailer adopts UCP, they are essentially providing a roadmap that an AI agent can read. This allows the agent to understand exactly how to add an item to a basket, how to check if a specific size is in stock, and how to apply a user’s loyalty rewards without a human ever having to click a button. This shift from “reading” a website to “interfacing” with a protocol is what will define the next decade of SEO and digital retail.

New Features: Empowering the Next Generation of AI Agents

Google’s recent expansion of the protocol introduces three critical features that address the most common friction points in automated shopping. These updates move the needle from simple product discovery to complex, multi-step transactions.

Advanced Cart Capability

One of the primary limitations of early AI shopping experiments was the “one-and-done” nature of the interaction. An agent might be able to find a single pair of shoes and send the user to a checkout page, but it struggled with the complexity of building a full shopping basket. The new cart capability allows agents to add or save multiple products from a single retailer in one go.

This mirrors the way humans actually shop. A consumer rarely visits a grocery or electronics site for a single item; they build a list. With this update, a user could tell Gemini, “I’m planning a camping trip; find me a four-person tent, a portable stove, and two sleeping bags from a reputable outdoor brand.” The AI agent can now assemble that entire “basket” within the UCP framework, allowing the user to review the final total and check out in a single step.
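To make the camping-trip example concrete, here is a hypothetical sketch of the kind of multi-item cart request an agent could assemble under a standardized protocol. The field names, SKUs, and merchant identifier are all invented for illustration; UCP’s real message schema is defined by the protocol specification, not by this example.

```python
# Hypothetical multi-item cart payload an agent might assemble in one
# structured call, replacing several separate human checkout flows.
import json

cart_request = {
    "merchant_id": "outdoor-retailer-001",
    "items": [
        {"sku": "TENT-4P-PRO", "quantity": 1},
        {"sku": "STOVE-PORTABLE", "quantity": 1},
        {"sku": "SLEEPBAG-20F", "quantity": 2},
    ],
    # Identity linking: the agent shops as the authenticated customer,
    # so member pricing and loyalty rewards still apply.
    "customer_token": "linked-identity-token",
}

print(json.dumps(cart_request, indent=2))
```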
Real-Time Catalog Integration

In e-commerce, data freshness is everything. There is nothing more frustrating for a consumer than being told an item is in stock by an AI, only to find it sold out upon reaching the checkout page. The UCP catalog feature gives agents direct access to real-time product data, including pricing, inventory levels, and specific product variants like color or size.

This real-time link ensures that the AI agent is acting on the most current information available. It also allows the agent to handle more nuanced queries. Instead of just finding “a blue shirt,” the agent can confirm that the “Navy Blue Performance Polo” is available in “Large” at the “Downtown Seattle” location for a specific price. This level of accuracy is vital for building consumer trust in AI-led commerce.

Identity Linking and Loyalty Preservation

For retailers, the most valuable customers are those in their loyalty programs. Historically, shopping through third-party aggregators or search engines meant that these “logged-in” benefits were often lost. A customer might have a 10% member discount or qualify for free shipping, but if an AI agent is handling the search, those perks might not be applied.

The new identity linking feature in UCP solves this problem. It allows shoppers to carry over their authenticated status to platforms connected through the protocol. This means that when an agent shops on behalf of a user, it does so using the user’s established profile. Member-only pricing, accumulated rewards points, and saved shipping preferences remain intact. This feature is a win-win: retailers maintain their direct relationship with the customer, and customers get the best possible deal without having to manually log in to every site they visit.

The Strategic Importance for SEO and Digital Marketing

For digital marketers and SEO professionals, the expansion of UCP signals a shift in priorities. While traditional organic ranking factors like backlinks and keyword density still matter, “data quality” is becoming the new gold standard. If an AI agent cannot verify your inventory or understand your pricing through a protocol like UCP, you effectively do not exist in the “agentic” search results.

Visibility in the Age of Gemini

Google has made it clear that these UCP capabilities will be integrated directly into its own ecosystem, specifically within Google Search and the Gemini app. As more users turn to Gemini for “help me buy” tasks, the products that show up will be those backed by robust, protocol-compliant data. This means that a retailer’s Merchant Center feed is no longer just a tool for Google Shopping ads; it is the fundamental data source for the AI agents that


What patents reveal about the foundations of AI search

Every time a new large language model (LLM) is released or Google rolls out a significant update to its AI Overviews, the SEO industry tends to react with a mix of panic and excitement. We often witness a form of collective amnesia, where professionals scramble to optimize for “new” features that were actually outlined in patent offices over a decade ago. We become so fixated on the immediate future that we forget to look at the historical blueprints that describe exactly how these systems are built to function.

To succeed in the landscape of 2026 and beyond, the most effective strategy isn’t just to be a futurist; it is to be an archaeologist. Understanding the foundations of AI search requires digging into the technical filings that preceded the current era of generative AI. By looking back at foundational patents, we can understand the long-standing rules of the game, and by looking ahead, we can see how modern computing power is finally allowing search engines to enforce those rules at scale.

The archaeology of SEO: Why history repeats in search

There is a persistent misconception that mastering AI search requires becoming a master prompt engineer or staying awake 24/7 to read every research paper from OpenAI or Anthropic. While staying current is helpful, the underlying logic governing today’s search “magic” is often based on mathematical frameworks established years ago. To truly understand search, we must look at the documents that defined the intent of the engineers long before the hardware could keep up with their vision.

We cannot discuss patent research without honoring the legacy of the late Bill Slawski. For two decades, Slawski served as the SEO industry’s premier archaeologist. While the rest of the community was debating keyword density and backlink quantities, Slawski was dissecting dry, technical filings to predict the exact state of search we find ourselves in today. His work at SEO by the Sea proved that search engines provide a roadmap of their intentions years before those intentions become reality.

Agent Rank (2007): The precursor to E-E-A-T

Slawski analyzed the concept of “Agent Rank” nearly 20 years ago. This patent described a system of digital signatures that would connect content to specific authors, assigning them reputation scores based on the quality and reception of their work. At the time, the SEO community largely ignored it because the technology to implement it globally didn’t seem to exist.

Fast forward to today, and we refer to this concept as E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness). Google didn’t just invent these guidelines recently; they finally acquired the processing power and the machine learning sophistication to run the numbers on author reputation. The “Agent” is the “E” and the “A,” and the patent was the blueprint.

The Fact Repository (2006): The birth of answer engines

Long before the Google Knowledge Graph became a household name in marketing, Slawski identified patents for a “Browseable Fact Repository.” This 2006 filing described a system for extracting facts from the web and storing them in a structured way that a machine could easily navigate. This logic is the primary engine behind modern “answer engines.” When an AI provides a direct answer, it isn’t “thinking” in the human sense; it is querying a repository of facts anchored by the principles laid out in the mid-2000s.

The algorithm isn’t magic; it is mathematics applied to historical blueprints. If you want to understand why a new feature appears today, look at the filings from 2007 to 2016. That is where the engineering rules were established.

Strategy vs. Mechanics: Moving from strings to verified things

In the modern SEO landscape, it is easy to get buried under a mountain of buzzwords. To stay focused, it is helpful to categorize your efforts into two buckets: strategy and mechanics. The most significant shift we have seen in recent years is the move from “strings” to “things,” but in 2026, the baseline has shifted again. We have moved from simple entities (things) to verified entities (verified things). An entity—whether it is a person, a brand, or a concept—is essentially worthless in the eyes of an AI if the system cannot prove it is real. We can use a construction metaphor to understand this hierarchy:

Semantic SEO is the architecture

This is the vision for your digital presence. Semantic SEO is about ensuring the meaning of your content aligns with the user’s intent. It involves mapping out topics and ensuring that the context of your site provides a comprehensive answer to a user’s underlying questions.

Entity SEO is the bricklaying

Entities are the building blocks. By using distinct nouns and structured data, you build a site that a machine can parse. You are moving away from ambiguous keywords and toward specific, identifiable concepts that exist in the search engine’s knowledge base.

Verification is the mortar

This is the step most SEOs currently overlook. Verification is about turning entities into findable, provable facts that are connected to a verified human or organization. If your content isn’t connected to a provable expert, it is viewed as “noise.” In an era where AI can generate infinite content, the only way for a search engine to maintain quality is to prioritize content that is anchored to a verifiable source.
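In practice, one common way to anchor content to a verifiable person is author markup with sameAs links to corroborating profiles. The sketch below is a minimal illustration of that idea; the names, titles, and URLs are placeholders, and any real markup should be validated against schema.org guidelines.

```python
# Minimal sketch of author-verification markup: tie the article to an
# identifiable person, and tie that person to external profiles a
# machine can cross-check.
import json

author_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What patents reveal about the foundations of AI search",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",            # placeholder: a real, named expert
        "jobTitle": "Search Analyst",
        # sameAs links let a machine corroborate that this author is a
        # findable entity rather than an invented byline.
        "sameAs": [
            "https://www.linkedin.com/in/janedoe-example",
            "https://scholar.google.com/citations?user=EXAMPLE",
        ],
    },
}

print(json.dumps(author_markup, indent=2))
```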
AEO vs. GEO: Understanding the nuance of AI search

The industry often uses the terms Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO) interchangeably, but they are fundamentally different. They require different content structures, serve different user needs, and are rooted in different technological approaches.

Answer Engine Optimization (AEO)

AEO is designed for the “direct answer.” This is the realm of voice assistants like Siri and Alexa, or the single, definitive snippet at the top of a search result. It is a binary system. The search engine is looking for a specific fact to fulfill a specific query. To succeed in AEO, you need “confidence anchors.” These are unnuanced, structured facts. Because the engine is “fetching” rather than “synthesizing,” it needs high-confidence data. If your


You’re Not Scaling Content. You’re Scaling Disappointment

The Illusion of Growth in the Age of Mass Production

In the current digital landscape, the pressure to produce content at an industrial scale has never been higher. Marketing departments and SEO agencies often find themselves locked in a relentless arms race, fueled by the belief that a higher volume of pages inevitably leads to a larger share of the market. This philosophy, often referred to as the “volume playbook,” suggests that if you can dominate a keyword set by sheer mass, you can force your way into search engine dominance. However, as industry veterans like Pedro Dias have pointed out, this strategy is frequently a house of cards.

The reality is that many organizations are not actually scaling their influence, their brand, or their revenue. Instead, they are scaling disappointment. They are investing thousands of hours and significant capital into a content engine that produces diminishing returns, creates technical debt, and ultimately alienates the very audience it was intended to capture. To understand why the “publish more pages” strategy so often results in failure, we must examine the fundamental disconnect between search engine algorithms and the industrialization of content creation.

The Recurring Cycle of the Volume Playbook

The history of search engine optimization is littered with the remains of content strategies that prioritized quantity over quality. From the early days of keyword stuffing and link farms to the mid-2010s era of content “mills,” the cycle remains remarkably consistent. It begins with a loophole or an observation that certain types of thin content are ranking well. This leads to a frantic rush to replicate that success at scale.

Initially, the results may look promising. A surge in indexed pages often leads to a temporary spike in impressions and clicks. Stakeholders celebrate the “hockey stick” growth on their analytics dashboards. However, this success is almost always short-lived. Google and other search engines are designed to provide the best possible answer to a user’s query. When a site begins to flood the index with low-value, repetitive, or derivative content, it triggers a series of algorithmic checks designed to maintain the integrity of the search results.

Eventually, an update occurs—be it a core update or a specific helpful content adjustment—and the site’s traffic collapses. The disappointment sets in, followed by a period of panic, a pivot to a “new” strategy that is often just a variation of the old one, and the cycle begins anew. This cycle persists because it is easier to measure “number of articles published” than it is to measure “true audience value.”

The AI Catalyst: Accelerating the Race to the Bottom

The advent of generative AI has acted as an accelerant for this cycle of disappointment. Tools that can generate thousands of words in seconds have lowered the barrier to entry for content production to near zero. While AI is a powerful tool for research and structural assistance, its misuse has led to a “gray goo” of content—vast expanses of text that are grammatically correct but fundamentally empty of new insights, unique perspectives, or genuine expertise.

When organizations use AI to scale content without human oversight or editorial standards, they are effectively automating their own irrelevance. Search engines have become increasingly sophisticated at identifying “LLM-style” writing that lacks the E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) required for high rankings. By using AI to simply rephrase existing information found on the web, brands are contributing to a recursive loop of content that offers no incremental value to the user. This is not scaling content; it is scaling noise.

The Danger of Information Dilution

One of the most significant risks of mass-producing SEO content is the dilution of a website’s overall authority. Every page on a website carries a certain weight in the eyes of a search engine. When a site is bloated with thousands of thin, low-performing pages, it creates “index bloat.” This forces search engine crawlers to waste their “crawl budget” on low-quality pages rather than discovering and indexing the truly valuable insights hidden within the site.

Furthermore, internal link structures become muddled. When you have twenty different articles targeting slightly different variations of the same keyword, you are effectively competing against yourself. This internal cannibalization confuses search engines and makes it difficult for them to determine which page is the definitive authority on a topic. Instead of having one powerhouse page that ranks in the top three results, you end up with twenty pages languishing on page five of the search results.
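A quick way to spot this kind of cannibalization is to flag queries where several of your own URLs rank at once. The rough sketch below assumes a CSV export of (query, url, position) rows, such as one pulled from Search Console; the file name, column names, and threshold are placeholders to adapt to your own data.

```python
# Flag queries answered by several competing URLs on the same site;
# these are candidates for consolidation into one authoritative page.
import csv
from collections import defaultdict

urls_per_query: dict[str, set[str]] = defaultdict(set)

with open("ranking_export.csv", newline="") as f:
    for row in csv.DictReader(f):  # expects query,url,position columns
        urls_per_query[row["query"]].add(row["url"])

for query, urls in sorted(urls_per_query.items(), key=lambda kv: -len(kv[1])):
    if len(urls) >= 3:  # arbitrary threshold for "competing with yourself"
        print(f"{query}: {len(urls)} competing URLs")
```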
Understanding the Difference Between Scale and Growth

True scaling in content marketing involves increasing the impact of your message without a linear increase in resources or a decrease in quality. Growth, on the other hand, should be measured by the depth of engagement and the conversion of readers into loyal advocates. The “publish more” playbook confuses activity with progress. Consider the following distinctions between scaling disappointment and scaling value:

- Scaling Disappointment: Focuses on output metrics (number of posts, word counts, keyword density).
- Scaling Value: Focuses on outcome metrics (time on page, return visitor rate, assisted conversions, brand sentiment).
- Scaling Disappointment: Rehashes existing top-ranking content to “match” what is already there.
- Scaling Value: Introduces original research, case studies, and contrarian viewpoints that add to the conversation.
- Scaling Disappointment: Relies on automated templates and generic AI prompts.
- Scaling Value: Leverages Subject Matter Experts (SMEs) to provide depth that AI cannot replicate.

The Psychological Trap of the “More is Better” Mindset

Why do experienced marketers continue to fall for the volume trap? Much of it is rooted in corporate psychology. In many organizations, SEO is treated as a commodity rather than a strategic asset. Executives often want to see tangible evidence of work, and a spreadsheet showing 500 new URLs is a more “tangible” deliverable than a report explaining why three high-quality white papers took three months to produce. This creates a misaligned incentive structure. Agencies are incentivized to bill for “deliverables,” and internal teams are incentivized to meet “content quotas.” Neither of these incentives is tied to


Multi-location SEO strategy: Stop competing with your own content

In the digital marketing landscape, multi-location brands often operate under a dangerous assumption: that more content across more pages automatically translates to higher search engine rankings. While this “carpet-bombing” approach to content might seem like a logical way to capture local markets, it frequently results in a phenomenon known as internal competition. Instead of outranking their competitors, many large-scale franchises and businesses with multiple branches find themselves inadvertently battling their own web pages for dominance in the Search Engine Results Pages (SERPs).

Investing heavily in content is a hallmark of a healthy SEO budget, but without a unified strategy, that investment can actually dilute your brand’s authority. When every individual location page or local blog covers the exact same topics with the same keywords and search intent, search engines like Google struggle to determine which page is the most relevant. The result? A fragmented digital presence where authority is spread too thin, crawl budgets are wasted, and potential customers are left confused. To win in 2026 and beyond, brands must move away from repetitive volume and toward a sophisticated, tiered content strategy that distinguishes between corporate authority and local relevance.

Where the strategy breaks down

The breakdown of a multi-location SEO strategy is rarely a deliberate choice. More often, it is a byproduct of rapid scaling or a lack of centralized marketing governance. In many organizations, there is a natural tension between the corporate marketing team and local franchisees or branch managers. Corporate teams are focused on the “big picture”—building national brand awareness and high-level domain authority. Conversely, local teams are boots-on-the-ground; they want content that addresses their specific community’s needs and keeps users on their specific sub-pages.

When these two forces act independently, the “too many cooks in the kitchen” syndrome takes over. Local branches may start their own blogs to capture local search intent, often mimicking the exact educational topics already covered on the main corporate site. Without a clear framework for who “owns” specific keywords, the website begins to cannibalize itself. Instead of having one authoritative page about “How to maintain an HVAC system” that ranks nationally and funnels users to local branches, a company might end up with 50 mediocre pages on the same topic, none of which have enough link equity or unique value to rank on page one.

What type of content belongs at corporate

The key to a successful multi-location strategy lies in the division of labor. Corporate content should act as the “North Star” for the brand, housing the comprehensive, evergreen, and educational resources that establish the organization as a leader in its industry. This centralization is essential for building domain-wide authority and ensuring that search engines view the brand as a primary source of information.

Educational and informational pillars

If a user is searching for “benefits of routine dental cleanings” or “how to choose the right homeowner’s insurance,” they are looking for information that remains consistent regardless of their geographic location. These broad, informational queries should be owned by the corporate blog. By consolidating this content into a single, high-quality URL, the brand can aggregate all its backlink power and social signals onto one page. This makes it much easier to rank for competitive, high-volume keywords than if that authority were split across dozens of local subfolders.

Core service and product descriptions

While local branches provide the service, the definition of that service usually comes from the top. Core product pages and service lines should be centralized. This ensures brand consistency and prevents the creation of near-duplicate pages that offer no unique local value. While a location page can—and should—link to these core service pages, they do not need to rewrite the entire technical specification of a product for every city in which they operate.

Brand identity and mission

Content regarding the company’s history, its leadership team, mission statements, and core values should live at the corporate level. These are the trust signals that reinforce E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness). Standardizing this information across the organization ensures that the brand’s message is never diluted or misrepresented at the local level.

What type of content belongs at the local level

If corporate owns the “What” and the “Why,” the local level must own the “Where” and the “Who.” Local content is about relevance, conversion, and community connection. This is where the brand proves it isn’t just a faceless national entity, but a local partner that understands the specific needs of its customers in a particular city or region.

Geo-specific landing pages

Every location needs a dedicated landing page that is more than just a placeholder with an address and phone number. To stand out, these pages require unique copy that reflects the local market. This includes localized metadata (Title tags and Meta descriptions that include the city name) and relevant structured data. Using LocalBusiness schema, including reviews and geo-coordinates, helps Google’s AI understand exactly where the business operates and how it relates to local search queries.
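As a minimal sketch of what that structured data can look like for one location page, consider the LocalBusiness markup below. Every value is a placeholder (an invented HVAC branch in Austin); extend it with your real name, address, and phone data and validate it before deploying.

```python
# Minimal LocalBusiness markup for a single location page, including
# geo-coordinates and an aggregate rating. All values are placeholders.
import json

location_markup = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Acme HVAC - Austin",
    "url": "https://www.example.com/locations/austin",
    "telephone": "+1-512-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "100 Example St",
        "addressLocality": "Austin",
        "addressRegion": "TX",
        "postalCode": "78701",
    },
    "geo": {"@type": "GeoCoordinates", "latitude": 30.2672, "longitude": -97.7431},
    # Location-specific reviews reinforce that this branch, not the
    # corporate homepage, is the relevant result for local queries.
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "reviewCount": "213",
    },
}

print(json.dumps(location_markup, indent=2))
```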
Building unique local value

To avoid being flagged as duplicate content, location pages should focus on elements that are truly unique to that branch. These include:

- Local Reviews and Testimonials: Displaying reviews from customers in that specific city provides social proof that resonates with local searchers.
- Team Bios and Photos: Introducing the actual staff members at a specific location builds immediate trust and differentiates the branch from a generic corporate entity.
- Community Involvement: Content about local event sponsorships, charity partnerships, or awards won in that specific region adds a layer of authenticity that cannot be replicated at the corporate level.
- Location-Specific Imagery: High-quality photos of the actual storefront, the local team, and the surrounding area help users and search engines confirm the location’s legitimacy.

Whether these elements live on a single robust location page or within a “microsite” structure (where each location has its own subfolder and nested pages), the goal remains the same: strengthen local relevance to drive conversions.

Common SEO risks of a


Your SEO maturity score doesn’t measure what you think it does

Understanding the True Nature of SEO Maturity

In the high-stakes world of digital marketing, we often fixate on metrics that provide immediate gratification. We track keyword rankings, organic traffic growth, and backlink profiles with religious fervor. However, when an organization decides to measure its "SEO maturity," there is a common and dangerous misconception about what that score actually represents. Many stakeholders believe a high maturity score is a reflection of technical perfection or content volume. In reality, your SEO maturity score doesn't measure what you think it does.

Most SEO programs operate in a state of precarious success. They rely on the brilliance of a few individuals rather than the strength of the organization's infrastructure. The Visibility Governance Maturity Model (VGMM) was designed to address this specific gap. It isn't an audit of your H1 tags or your site speed; it is an assessment of clear ownership, documented processes, and the decision rights that prevent your hard work from being accidentally dismantled by other departments.

If your SEO strategy relies on a "hero" to save the day whenever an algorithm update hits, your organization isn't mature; it's lucky. True maturity is about sustainability, and the VGMM is the diagnostic tool that reveals whether your success is built on a foundation of granite or a house of cards.

What VGMM Questions Are Designed to Reveal

To understand the score, you must first understand the questions. A VGMM assessment doesn't ask practitioners whether they know how to optimize a page. Instead, these questions are directed at managers and the C-suite: the individuals responsible for the resources and governance of the brand's digital presence.

This is a critical distinction. The SEO practitioner usually knows exactly what needs to be done. They understand the nuances of schema markup, the importance of internal linking, and the complexities of crawl budget. But the VGMM isn't testing individual knowledge; it is testing institutional knowledge. It diagnoses organizations where SEO expertise lives exclusively in the heads of employees rather than in documented, governed processes. If an organization's SEO strategy walks out the door when a senior manager takes a new job, that organization has a maturity problem.

Governance gaps typically manifest in the responses of management. When senior leaders are asked about the SEO process, the warning signs are often phrases like:

"I don't actually know the answer to that."
"You'd have to ask Sarah; she handles all the technical stuff."
"We had a process for that last year, but I'm not sure if anyone is still following it."
"Every regional team handles their own optimization differently."
"I think that documentation exists somewhere in the shared drive, but I haven't seen it."

When leadership cannot answer basic questions about governance, it is a clear signal that SEO processes are not institutionalized. The organization is operating in a reactive state, vulnerable to personnel changes and departmental silos.

The SPOF Reality Check: Why You Might Be a Liability

One of the most sobering aspects of the VGMM is the identification of a Single Point of Failure (SPOF). In many organizations, the most talented SEO practitioner is also the company's greatest risk. If you are the person who knows where all the "bodies are buried" (the weird redirects from 2018, the logic behind the canonical tags, exactly what will break if the dev team pushes a specific update), you are a SPOF.

While this might feel like ultimate job security, it is actually what governance experts call a "job prison." You cannot take a vacation without checking your email. You cannot be promoted without leaving a vacuum that could collapse the department. More importantly, from a maturity standpoint, a SPOF acts as a hard ceiling: an organization cannot move past Level 2 maturity as long as a Single Point of Failure exists.

When the VGMM identifies you as a SPOF, it changes the conversation with leadership. Instead of you begging for more help, the data shows leadership that the current setup is a business risk. This realization leads to several positive outcomes:

Resource Allocation: Leadership realizes that your knowledge must be codified into documentation.
Training Budgets: Approval is granted to train others, spreading the expertise across the team.
Institutional Continuity: Your expertise becomes part of the company's intellectual property, not just a personal skill set.
Better Work-Life Balance: You can finally step away from the office knowing that the systems you built are governed by process, not just your presence.

How Domain Scores Become a VGMM Score

The VGMM is not a single, monolithic test. It is composed of various domain models, such as the SEO Governance Maturity Model (SEOGMM), the Content Governance Maturity Model (CGMM), and the Website Performance Maturity Model (WPMM). Each of these contributes to a holistic view of the company's digital health. The process of arriving at a final score involves five distinct steps.

Step 1: Domain Assessment

Each domain uses a bank of 30 to 60 governance questions. These are strictly behavior-based. An opinion-based question might ask, "Do you think SEO is important for our growth?" (to which everyone says yes). A behavior-based question asks, "Are the SEO standards for new product launches documented and signed off by the Product Lead?" (a question that requires proof of a process).
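To make the distinction concrete, here is one way a behavior-based question bank could be encoded. The field names, weights, and example questions are hypothetical assumptions for illustration, not the published VGMM schema; the key idea is that each question demands evidence of a process rather than an opinion.

```python
from dataclasses import dataclass

# Hypothetical encoding of behavior-based governance questions.
# Field names and weights are illustrative, not the published VGMM schema.
@dataclass
class GovernanceQuestion:
    text: str               # the question put to managers or the C-suite
    weight: float           # criticality: how damaging a gap here would be
    evidence_required: str  # the proof of process an assessor would ask to see

question_bank = [
    GovernanceQuestion(
        text="Are SEO standards for new product launches documented "
             "and signed off by the Product Lead?",
        weight=3.0,
        evidence_required="Signed launch checklist in the shared repository",
    ),
    GovernanceQuestion(
        text="Is there a named owner who approves redirect maps "
             "before releases?",
        weight=2.0,
        evidence_required="Ownership chart naming the approver",
    ),
]
```

The `evidence_required` field is what separates these from opinion questions: an answer only counts if the artifact it points to actually exists.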
Step 2: Weighted Scoring

Not all governance failures carry the same weight. A minor documentation gap in a low-traffic section of the site is weighted differently than a lack of ownership over critical technical decisions. The system identifies which gaps have the highest potential for catastrophic failure and weights the score accordingly.

Step 3: The SPOF Constraint

This is the "fail-safe" of the maturity model. If a Single Point of Failure is detected, the domain score is automatically capped at Level 2 (Emerging). It does not matter how sophisticated your tools are or how high your traffic is; if the system relies on one person, it is not "Structured" (Level 3).

Step 4: Domain Aggregation

Individual domain scores are then averaged into an overall VGMM score.
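Taken together, steps 2 through 4 reduce to a weighted average with a hard cap. The sketch below illustrates that arithmetic under assumed weights and a 1-to-5 level scale; the numbers, answers, and function names are illustrative, since the article does not publish the model's actual formula.

```python
# Illustrative arithmetic for steps 2-4: weighted scoring, the SPOF cap,
# and domain aggregation. Weights, answers, and the 1-5 level scale are
# assumptions for demonstration, not the published VGMM formula.

def domain_score(answers, spof_detected):
    """answers: list of (level_achieved, weight) pairs for one domain."""
    total_weight = sum(w for _, w in answers)
    weighted = sum(level * w for level, w in answers) / total_weight
    # Step 3: a detected Single Point of Failure caps the domain
    # at Level 2 (Emerging), no matter how strong the other answers are.
    return min(weighted, 2.0) if spof_detected else weighted

# Step 2: each answer carries a level (1-5) and a criticality weight.
seo_domain = [(4, 3.0), (3, 1.0), (5, 2.0)]      # e.g., SEOGMM answers
content_domain = [(3, 2.0), (4, 2.0)]            # e.g., CGMM answers

scores = [
    domain_score(seo_domain, spof_detected=True),   # capped at 2.0
    domain_score(content_domain, spof_detected=False),
]

# Step 4: domain scores are averaged into an overall score.
overall = sum(scores) / len(scores)
print(f"Domain scores: {[round(s, 2) for s in scores]}, overall: {overall:.2f}")
```

The cap is the important detail: the first domain's otherwise strong answers become irrelevant once a SPOF is detected, which is exactly how the model turns one person's indispensability into a visible business risk.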
