
Google launches non-skippable Video Reach campaigns for connected TV

The landscape of digital advertising is undergoing a seismic shift as the traditional television screen transforms into a data-driven, programmatic powerhouse. In a major move to capitalize on the growing dominance of streaming, Google has officially launched Video Reach Campaigns (VRC) Non-Skip ads for connected TV (CTV). The update, now globally available in Google Ads and Display & Video 360, marks a significant milestone for brands looking to capture the undivided attention of viewers in the living-room environment.

For years, YouTube has occupied a unique space between social media and traditional broadcasting. Its recent performance metrics, however, suggest it has firmly secured its place as the modern successor to the television network. According to Nielsen, YouTube has remained the number one streaming platform in the United States for three consecutive years. With millions of households ditching cable in favor of smart TVs and streaming devices, Google is positioning itself as the primary gateway for advertisers to reach these high-value audiences through non-skippable, high-impact video formats.

The Evolution of Video Reach Campaigns

Video Reach Campaigns were originally designed to simplify the process of buying video ads across YouTube's vast ecosystem. Advertisers select a primary goal, such as efficient reach or frequency, and Google's algorithms distribute ads across different formats to achieve the best results. The introduction of the Non-Skip variant specifically for CTV is a direct response to advertiser demand for guaranteed message delivery on the largest screen in the home.

Historically, digital video advertising relied heavily on skippable formats, where viewers could bypass an ad after five seconds. While that model is effective for engagement and ensures brands only pay for interested viewers, it poses a challenge for storytelling: complex brand messages frequently require more than five seconds to resonate. By bringing non-skippable inventory to the forefront of CTV strategies, Google is giving advertisers the assurance that their full creative vision will be seen by the target audience.

How Google AI Optimizes Non-Skippable Reach

One of the standout features of this launch is the integration of Google AI to manage the campaign mix. In the past, media buyers had to manually allocate budgets between different ad lengths, such as 6-second bumpers or 15-second spots. This manual process often led to inefficiencies, as it was difficult to predict which format would perform best at any given moment across diverse audience segments. With VRC Non-Skip campaigns, Google AI takes the wheel, dynamically optimizing across three primary formats.

1. Six-Second Bumper Ads

Bumper ads are brief, punchy, and designed for maximum reach and frequency. They are ideal for reinforcing brand awareness and staying top-of-mind without disrupting the viewer's experience for too long. In a CTV context, these serve as effective reminders for audiences who may have already seen longer-form content from the brand.

2. Fifteen-Second Standard Spots

The 15-second spot is the industry standard for video advertising. It provides enough time to establish a narrative, showcase a product, and deliver a clear call to action. Within VRC Non-Skip, these 15-second ads ensure that the core of the brand message is delivered in its entirety, providing a balance between brevity and storytelling.

3. Thirty-Second CTV-Only Non-Skippable Formats

Exclusive to the connected TV experience, the 30-second non-skippable format is designed for high-impact storytelling. Because the CTV environment is inherently a "lean-back" experience—where viewers are typically settled on a couch and less likely to be multitasking with a mouse or keyboard—longer ads are more acceptable and often lead to higher brand lift. Google AI prioritizes these longer slots when it determines they will have the greatest impact on campaign goals.

Why the Connected TV (CTV) Market Is Critical

The shift toward CTV is not just a trend; it is a fundamental change in how media is consumed. Connected TV refers to any television set used to stream video over the internet. This includes smart TVs, gaming consoles like the PlayStation 5 or Xbox Series X, and streaming sticks like Roku, Amazon Fire TV, or Google TV.

For advertisers, CTV represents the best of both worlds: it offers the premium, full-screen, high-definition experience of traditional linear television combined with the precision targeting and measurement capabilities of digital advertising. Here are a few reasons why this environment is so valuable.

High View-Through Rates

Unlike mobile devices or desktop computers, where users are often prone to clicking away, scrolling, or switching tabs, CTV viewers are generally more focused. Non-skippable ads in this environment boast very high completion rates, ensuring that the advertiser's investment results in a fully delivered message.

The "Co-Viewing" Effect

One of the unique aspects of CTV is co-viewing. While a mobile ad is typically seen by one person, a CTV ad is often viewed by multiple people simultaneously—families watching a movie, friends watching a sports game, or couples catching up on a series. This effectively lowers the cost per viewer: the CPM stays the same, but each impression reaches multiple sets of eyes.

Premium Content Environment

YouTube on the TV screen is often associated with high-quality, long-form content. Whether it is a documentary, a high-budget gaming stream, or a music video, placing non-skippable ads within this content allows brands to align themselves with premium entertainment, boosting brand perception.

Strategic Implications for Brands and Media Buyers

The general availability of VRC Non-Skip campaigns simplifies the workflow for digital marketers and agencies. By moving away from manual format splitting, advertisers can focus more on creative strategy and high-level audience targeting rather than the minutiae of budget allocation.

For brands with specific reach goals—such as launching a new product or driving awareness for a seasonal event—the ability to guarantee full-message delivery is a game-changer. It eliminates the "creative anxiety" of wondering whether the most important part of the ad was skipped. Instead, the focus shifts to ensuring the 15 or 30 seconds of content are as engaging and persuasive as possible. Furthermore, because these campaigns are managed through Google Ads and Display & Video 360, advertisers can leverage the full suite of Google's first-party data.


Google AI Overviews Surges Across 9 Industries via @sejournal, @martinibuster

The Evolution of Search: Understanding the Rise of Google AI Overviews

The landscape of digital search is undergoing its most significant transformation since the introduction of the mobile-first index. Google AI Overviews, formerly known as the Search Generative Experience (SGE), has officially moved from a limited experimental phase into a dominant feature of global search results pages. Recent data indicates a massive surge in the visibility of these AI-generated summaries, particularly across nine key industries where the impact on organic traffic and user behavior is becoming impossible to ignore.

As Google continues to integrate its Gemini large language models into its core search product, the frequency of AI-triggered results has reached a critical tipping point. Current metrics suggest that nearly 50% of all search queries now trigger an AI Overview. This shift represents a fundamental change in how information is synthesized and presented to the user, moving away from a list of links toward a comprehensive, conversational answer.

For SEO professionals, content creators, and business owners, this surge is not just a technical update; it is a paradigm shift. Understanding where AI Overviews are appearing, why they are appearing, and how to maintain visibility in this new environment is essential for anyone relying on search engine traffic.

The Scaling of AI Search: Breaking Down the 50% Threshold

The rapid expansion of AI Overviews is part of Google's strategy to maintain its dominance in an era where users are increasingly turning to AI chatbots like ChatGPT and Perplexity for information. By providing direct answers within the search interface, Google aims to reduce the friction of the user journey.

The fact that nearly half of all search queries now feature an AI Overview suggests that Google has refined its confidence thresholds. In the early stages of the rollout, AI summaries were often relegated to low-stakes informational queries. The technology has since matured to the point where it now handles complex, multi-layered questions that previously required several clicks into different websites to answer.

This expansion has been particularly noticeable in "how-to" queries, long-tail informational searches, and comparison-based shopping queries. When a search engine provides a definitive summary at the top of the page, the traditional "blue links" are pushed further down the page, fundamentally altering the click-through rate (CTR) dynamics for the top organic positions.

The 9 Industries Experiencing the Greatest Impact

While the surge is visible across the board, nine specific industries have seen a disproportionate increase in AI Overview visibility. These sectors represent areas where users are seeking synthesized information, comparisons, or structured advice.

1. Healthcare and Medical Information

Despite the sensitivities surrounding "Your Money or Your Life" (YMYL) content, the healthcare industry has seen a massive influx of AI Overviews. Google is using its AI to summarize symptoms, explain medical procedures, and provide general health and wellness advice. While Google still includes disclaimers and citations to authoritative sources like the Mayo Clinic or WebMD, the AI Overview often provides enough information that a user may not feel the need to click through to a full article.

2. Financial Services and Personal Finance

From explaining complex mortgage terms to comparing credit card benefits, the finance industry is heavily saturated with AI-generated responses. Users looking for quick financial definitions or comparisons are finding the AI Overview to be a highly efficient tool. This puts a premium on financial institutions to ensure their proprietary data and expert insights are being cited within these summaries.

3. E-commerce and Retail

Retail has perhaps undergone the most sophisticated transformation. AI Overviews in the e-commerce space go beyond mere summaries; they act as shopping assistants. They aggregate reviews, highlight pros and cons of specific products, and offer buying guides directly within the search result. For retailers, the challenge is no longer just ranking for a product keyword, but ensuring their product data is structured in a way that the AI can accurately represent it in a summary.
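
To make that concrete, product data of this kind is typically expressed as schema.org JSON-LD embedded in the product page. The following is a minimal sketch with invented product names, URLs, and values; it illustrates the general shape of the markup rather than any format Google prescribes specifically for AI Overviews:

```html
<!-- Illustrative only: all names, URLs, and numbers are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Running Shoe",
  "image": "https://www.example.com/images/trail-shoe.jpg",
  "description": "Lightweight carbon-plated trail running shoe.",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "212"
  },
  "offers": {
    "@type": "Offer",
    "price": "129.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```
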
4. B2B Technology and SaaS

The technology sector, particularly Software as a Service (SaaS), is seeing a surge in AI Overviews for queries related to software comparisons, implementation guides, and technical troubleshooting. AI is adept at pulling information from documentation and forum threads to provide a single cohesive answer, which can impact the traffic flow to tech blogs and help centers.

5. Travel and Hospitality

The travel industry is a natural fit for AI synthesis. Queries such as "the best time to visit Tokyo" or "top 10 things to do in Paris" are now routinely answered by AI Overviews. These summaries pull from across the web to create mini-itineraries, often bypassing the traditional travel blogs and review sites that used to dominate these queries.

6. Education and Academic Resources

Students and lifelong learners are increasingly using Google to explain complex concepts, from mathematical theories to historical events. AI Overviews are proving highly effective at breaking down these topics into digestible bullet points, which has led to a surge in visibility across educational and academic search terms.

7. Professional Services (Legal and Consulting)

Like the finance and health sectors, professional services are seeing AI Overviews tackle informational "top of the funnel" queries. Questions about legal definitions or business strategies are being summarized by AI, often pulling from law firm blogs and consultancy white papers.

8. Real Estate

The real estate industry is seeing growth in AI Overviews for queries regarding neighborhood guides, market trends, and home-buying processes. Instead of clicking on a real estate portal to read a guide, users are getting the highlights of a specific ZIP code's market conditions directly from the AI.

9. Lifestyle and Entertainment

From movie summaries and "where to watch" queries to cooking tips and lifestyle advice, this broad category has seen some of the highest densities of AI-triggered results. The conversational nature of Gemini allows it to provide recommendations that feel personalized, further entrenching the AI Overview in daily lifestyle searches.

How AI Overviews Change the Search User Experience

The primary goal of AI Overviews is to improve user satisfaction by reducing the time it takes to find an answer.


Bing Adds GEO To Official Guidelines, Expands AI Abuse Definitions via @sejournal, @MattGSouthern

The Evolution of Search: Understanding Bing's New Webmaster Guidelines

The digital landscape is currently undergoing its most significant transformation since the invention of the hyperlink. As artificial intelligence continues to weave itself into the fabric of the internet, search engines are forced to rewrite the rules of engagement. Microsoft's Bing has recently taken a monumental step in this direction by overhauling its official Webmaster Guidelines. This update is not merely a routine adjustment; it represents a fundamental shift in how search engines perceive, categorize, and utilize web content in the era of generative AI.

For years, Search Engine Optimization (SEO) was the primary framework for digital visibility. However, with the integration of Microsoft Copilot and other large language models (LLMs) directly into search results, a new discipline has emerged: Generative Engine Optimization (GEO). Bing's latest guidelines officially recognize this shift, providing webmasters with a clearer roadmap for handling AI grounding, meta-directive controls, and the evolving definitions of content abuse. By expanding these definitions, Bing is signaling that the era of "search as a list of links" is officially giving way to "search as a conversational engine."

What is Generative Engine Optimization (GEO)?

To understand the depth of Bing's guideline changes, one must first grasp the concept of Generative Engine Optimization. While traditional SEO focuses on ranking a website within a list of blue links, GEO focuses on ensuring that a website's information is accurately captured, synthesized, and cited by generative AI models. When a user asks Copilot a question, the AI doesn't just find a page; it reads multiple pages, understands the context, and generates a cohesive answer.

Bing's decision to add GEO to its official guidelines confirms that "optimizing for AI" is no longer a fringe theory; it is a core requirement for modern digital publishing. GEO involves structuring data in a way that LLMs can easily parse, ensuring that factual claims are clearly supported, and maintaining high topical authority so the AI trusts the source enough to include it in a generated response.

The Role of Citations in GEO

One of the most critical aspects of GEO discussed in the new guidelines is the importance of citations. Unlike traditional search, where a click is the primary metric, generative search relies on "grounding," the process by which an AI model links its generated text to verifiable data sources. For publishers, being the grounding source for a Copilot answer is the new gold standard. Bing's updated guidelines emphasize that for content to be used in this manner, it must be highly relevant, authoritative, and technically accessible to the BingBot crawler.

Copilot Grounding and the Importance of Fact-Based Content

The term "grounding" has become a buzzword in the AI space, but its inclusion in Bing's Webmaster Guidelines gives it formal weight. Grounding refers to the practice of providing the AI with a specific set of data to ensure its answers are accurate and not "hallucinated." When Copilot answers a query, it grounds its response in the index of the live web.

Bing's updated guidelines provide specific insights into how webmasters can improve their chances of being used for grounding. This involves more than just keyword density; it requires logical information architecture. Content that follows a clear question-and-answer format, uses detailed headers, and provides structured data (Schema.org) is much more likely to be utilized by Copilot. The guidelines suggest that the more "fact-dense" a page is, the more useful it becomes for a generative engine seeking to provide a concise summary to a user.
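
As an illustration, a question-and-answer block is commonly expressed with the standard schema.org FAQPage, Question, and Answer types. This is a minimal sketch; the question text is invented, and the guidelines describe structured data generally rather than mandating this exact schema:

```html
<!-- Illustrative FAQ markup; the question and answer text are invented -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is grounding in generative search?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Grounding is the process by which an AI model links its generated answer to verifiable sources in the live web index."
      }
    }
  ]
}
</script>
```
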
Improving Discovery for AI Grounding

To be effective in the world of GEO, publishers must ensure their technical foundations are flawless. Bing's updates highlight that if an AI cannot easily discern the relationship between different pieces of information on a page, it will likely skip that page in favor of a better-structured competitor. This makes the use of semantic HTML and clear, unambiguous language more important than ever. The goal is to reduce the "cognitive load" on the AI as it attempts to summarize your content.

New Meta Directive Controls for AI Answers

As AI tools began scraping the web to train models and provide real-time answers, many publishers voiced concerns regarding copyright and the potential loss of traffic. If an AI provides a perfect summary of an article, will the user ever click through to the website?

To address these concerns, Bing has expanded its meta-directive controls. These controls allow webmasters to dictate exactly how their content is used by Bing's generative features. The updated guidelines detail how publishers can use specific tags to opt out of certain AI features without completely removing themselves from the search index. This is a crucial distinction. In the past, the choice was often binary: allow indexing or block it. Now, Bing is introducing more granular "No-AI" style controls. For example, a publisher might want their content to appear in traditional search results but might not want Copilot to use their long-form investigative reporting to generate a 200-word summary that replaces the need for a visit.

The Technical Implementation of Directives

Webmasters can now use variations of the "NOCACHE" and "NOARCHIVE" tags, along with newer, more specific directives, to signal their preferences to BingBot. By implementing these tags, a site owner can protect their intellectual property while still maintaining a presence in the search ecosystem. This balance is vital for the sustainability of the open web, and Bing's inclusion of these controls in the official guidelines is a welcome move for the publishing industry.
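
In practice, these directives are expressed as robots meta tags in the page head. The following is a minimal sketch, assuming the standard meta-tag syntax; the exact semantics of each directive, and any newer AI-specific variants, are defined in Bing's own documentation:

```html
<!-- Applies to all crawlers: stay indexable, but limit how the stored
     copy of the page can be reused in generated answers -->
<meta name="robots" content="nocache">

<!-- Scoped to Bing's crawler only: prevent cached/archived copies entirely -->
<meta name="bingbot" content="noarchive">
```
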
A Softened Stance on AI-Generated Content

Perhaps the most controversial topic in digital publishing over the last two years has been the use of AI to create content. Initially, there was a fear that search engines would penalize any content not written by a human. However, Bing's updated guidelines reflect a more nuanced and "softened" stance on AI-generated content. Bing has clarified that the *origin* of the content is less important than its *utility* and *quality*. This aligns Bing more closely with Google, which has likewise indicated that how content is produced matters less than whether it is helpful and reliable.

How Researchers Reverse-Engineered LLMs For A Ranking Experiment via @sejournal, @martinibuster

Understanding the Shift from Search Engines to Generative Engines

The landscape of digital information retrieval is undergoing its most significant transformation since the inception of the World Wide Web. For decades, Search Engine Optimization (SEO) has been the primary vehicle for visibility, focusing on keywords, backlinks, and technical site health to appease Google's algorithms. However, the rise of Large Language Models (LLMs) like GPT-4, Claude, and Gemini has introduced a new paradigm: Generative Engine Optimization (GEO).

As users increasingly turn to AI chatbots and generative search experiences—such as Perplexity AI or Google's Search Generative Experience (SGE)—the goal for marketers and developers has shifted. It is no longer enough to rank on the first page of search results; brands now need to be the "chosen" answer generated by an LLM. To understand how to achieve this, researchers have begun reverse-engineering the internal ranking mechanisms of these models, exploring methodologies such as Shadow Models and Query-based solutions.

These experiments are crucial because LLMs operate as "black boxes." Unlike traditional search engines, which follow relatively predictable (though complex) rules, LLMs generate responses based on probabilistic weights and attention mechanisms. Understanding how to influence these outputs requires a scientific approach to reverse-engineering the logic behind LLM preferences.

The Challenge of LLM Ranking Transparency

Traditional SEOs are accustomed to tools like Ahrefs, Semrush, and Google Search Console that provide data on rankings and traffic. In the world of LLMs, this data is largely non-existent. When an LLM recommends a specific product or cites a particular source, it isn't always clear why that source was prioritized over others. Is it the source's authority, the semantic relevance of the text, or the specific way the query was phrased?

Researchers investigating this problem face the challenge of non-determinism. If you ask an LLM the same question twice, you might get two slightly different answers. This variability makes it difficult to pinpoint specific ranking factors. To combat this, researchers have developed frameworks to isolate variables and test how different inputs affect the final output. This is where the concepts of Shadow Models and Query-based solutions come into play.

Deep Dive into Shadow Models

One of the most sophisticated ways researchers are reverse-engineering LLMs is through the use of Shadow Models. A Shadow Model is essentially a smaller, more transparent model trained or fine-tuned to mimic the behavior of a larger "target" model (such as GPT-4). By observing how the target model responds to thousands of prompts, researchers can create a proxy that behaves similarly but allows for much deeper inspection.

The Architecture of a Shadow Model

Shadow Models work on the principle of knowledge distillation. Because researchers cannot see the internal weights of a proprietary model, they treat the model as an oracle: they feed it a vast array of queries, record the responses, and then train a secondary model on these input-output pairs. Once the Shadow Model reaches a high level of parity with the original, researchers can analyze the Shadow Model's decision-making process.

This method allows researchers to identify "activation patterns." For instance, they can see which parts of a prompt trigger the model to prioritize a specific type of information.
This insight is invaluable for understanding how an LLM evaluates the "quality" of a piece of content before including it in a generative summary.

Advantages of Using Shadow Models

The primary advantage of a Shadow Model is control. In a live environment, testing a large-scale LLM is expensive and slow. A Shadow Model can be run locally, allowing rapid-fire testing of different optimization strategies. Shadow Models also help identify biases in the original model: if a Shadow Model consistently ranks shorter, more concise answers higher, it likely reflects a preference ingrained in the larger model's training data.

The Role of Query-Based Solutions

While Shadow Models focus on replicating the model itself, Query-based solutions focus on the interaction between the user and the model. This approach is more practical for the average SEO professional because it doesn't require training a secondary AI. Instead, it involves the systematic manipulation of prompts and the retrieved context (often referred to as the "context window") to see what sticks.

Understanding Retrieval-Augmented Generation (RAG)

To understand Query-based solutions, one must understand Retrieval-Augmented Generation (RAG). Most modern LLM search experiences don't rely solely on the model's pre-trained knowledge. Instead, when a user asks a question, the system searches the web (or a specific database) for relevant documents, feeds those documents into the LLM, and asks the LLM to summarize them. Query-based experiments look at how the LLM decides which parts of that retrieved text to emphasize. Researchers test variables such as:

- Semantic Density: Does the model prefer text that is packed with facts, or text that flows naturally?
- Citation Placement: Does placing a brand name at the beginning of a paragraph increase the likelihood of it being mentioned in the AI's response?
- Authority Signals: Does the inclusion of expert quotes or statistical data within the retrieved text improve its "ranking" within the LLM's output?

The Effectiveness of Prompt Engineering

Query-based solutions also involve "jailbreaking," or probing the model's instructions. By using specific phrasing, researchers can force the model to reveal its prioritization logic. For example, asking the model to "compare these three sources and explain why one is better than the others" can provide direct insight into the internal evaluation criteria the LLM is using at that moment.

Comparative Analysis: Shadow Models vs. Query-Based Solutions

The research suggests that both methods are essential but serve different purposes. Shadow Models are excellent for broad, foundational research. They help us understand the "psychology" of the AI—what it values at a structural level—which is useful for long-term content strategy and for understanding the inherent limitations of different LLM architectures.

Query-based solutions, on the other hand, are more tactical. They are highly effective for "live" optimization. Because LLMs are updated frequently, a Shadow Model can quickly become outdated; Query-based testing allows for real-time adjustments to content to ensure it keeps pace with how the models currently behave.


From Visibility Engineering To Preference Engineering: The Rise Of The Infinite Tail via @sejournal, @TaylorDanRW

The Evolution of Search Paradigms: From Visibility to Preference

For decades, the core objective of Search Engine Optimization (SEO) was centered on a single concept: visibility. If a brand appeared on the first page of Google, it was successful. This era, which we can define as Visibility Engineering, focused on the mechanics of discovery: ensuring that crawlers could access content, that keywords matched user queries, and that backlink profiles were robust enough to signal authority.

However, the landscape of digital discovery is undergoing a seismic shift. The rise of generative AI, Large Language Models (LLMs), and hyper-personalized search algorithms has introduced a new challenge for marketers. We are moving away from a world where "being seen" is enough, into a world where "being preferred" by the algorithm is the only way to survive. This transition marks the rise of Preference Engineering and the emergence of what experts call the Infinite Tail.

To navigate this new reality, professionals must rethink their approach to content, technical structure, and brand authority. The traditional "Long Tail" of search has expanded into an infinite array of hyper-specific, intent-driven permutations. In this environment, broad visibility is becoming both less attainable and less valuable than deep-seated preference within specific niches.

Understanding Visibility Engineering: The Foundation of Traditional SEO

Visibility Engineering represents the traditional toolkit of the SEO industry. It is rooted in the idea that search engines are essentially librarians cataloging a vast index of information. To win at visibility engineering, a site needed to excel at three things: accessibility, relevance, and popularity.

Accessibility involved technical SEO—sitemaps, robots.txt, site speed, and mobile-friendliness. Relevance was achieved through keyword research and on-page optimization, ensuring that the words on the page mirrored the words in the search bar. Popularity was measured in the currency of the internet: the backlink. High-authority links acted as votes of confidence, pushing a site higher in the rankings.

While these elements remain important, they are no longer sufficient. Visibility engineering works in a world of limited results—the "ten blue links" model. But as search engines evolve into answer engines, the goal is no longer just to be one of the ten links; the goal is to be the specific entity that the AI chooses to synthesize into its final response.

The Rise of the Infinite Tail

The "Long Tail" was a term coined by Chris Anderson to describe the shift from a small number of "hits" at the head of a demand curve to a huge number of niche products in the tail. In SEO, this meant targeting specific, low-volume phrases rather than broad, high-volume keywords.

Today, we have entered the era of the Infinite Tail. With the advent of AI-driven search tools like Google's Search Generative Experience (SGE), Perplexity, and ChatGPT, the number of possible search permutations has become effectively infinite. Users no longer search using simple fragments like "best running shoes." Instead, they provide complex, multi-layered prompts: "Find me carbon-plated running shoes suitable for a marathon runner with wide feet who prefers a high drop and eco-friendly materials."

This is the Infinite Tail in action. There is no single "keyword" for that query. Instead, the AI must traverse an enormous web of data to find the entities that best match the user's specific preferences.
For a brand to surface in these results, it cannot rely on general visibility. It must be engineered to be the preferred choice for those specific parameters.

Transitioning to Preference Engineering

Preference Engineering is the practice of optimizing a digital presence so that an AI model or a search algorithm chooses your brand as the definitive answer for a specific user intent. Where visibility engineering asks, "How can I be seen?", preference engineering asks, "Why should I be chosen?"

This shift requires a move away from generic content toward high-fidelity, entity-based information. AI models do not just look for keywords; they look for relationships between entities. If a user asks for an "eco-friendly marathon shoe," the AI looks for brands that have strong, verified associations with "eco-friendly materials," "marathon performance," and "durability." Preference engineering is about building these associations so strongly that the algorithm views your brand as the most "probable" correct answer. This involves a much narrower focus than traditional SEO: you cannot be the preferred answer for everything, so you must choose the specific intersections where your expertise is undeniable.

The Critical Role of Entity Signals

In the world of Preference Engineering, the "entity" is the new keyword. An entity is a well-defined object or concept—a person, a place, a brand, or a specific product. Search engines now use Knowledge Graphs to understand how these entities relate to one another. To signal to an AI that your brand is the preferred entity, you must provide clear, structured data and consistent signals across the web (a markup sketch follows this list). This includes:

- Schema Markup: Using advanced JSON-LD to define your products, organization, and expertise in a language that machines can parse easily.
- Knowledge Graph Presence: Ensuring your brand is cited in authoritative databases like Wikidata, Wikipedia, and niche-specific industry directories.
- Consistent Citations: Maintaining a consistent name, address, and profile across all digital touchpoints to reinforce the identity of the entity.
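
As a sketch of what those signals can look like in markup, the JSON-LD below ties an organization to its Wikidata, Wikipedia, and social profiles via the standard sameAs property. Every name and URL here is a placeholder:

```html
<!-- Illustrative entity markup; all names and URLs are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Running Co.",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://en.wikipedia.org/wiki/Example_Running_Co.",
    "https://www.linkedin.com/company/example-running-co"
  ]
}
</script>
```
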
When an AI model calculates a response, it weighs the "strength" of an entity. If your brand has weak entity signals, the AI will likely skip over you in favor of a brand with a more established digital footprint, even if your content is technically "optimized" for keywords.

Deep Topical Coverage: Quality Over Quantity

The Infinite Tail demands a level of topical depth that traditional SEO often ignored. In the past, marketers might create a "hub and spoke" model to cover a topic broadly. In the era of preference, this isn't enough. You need to demonstrate topical authority through exhaustive, high-value coverage of a niche. Deep topical coverage means answering the questions that haven't been asked yet. It involves exploring the nuances, the edge cases, and the technical specifics of your field. For example, if you are an

Google expands recurring billing policy

Understanding the Shift in Google Ads for Healthcare

The landscape of digital advertising for the healthcare and pharmaceutical sectors has always been one of the most strictly regulated environments on the internet. For years, Google has maintained a cautious stance, balancing the need for commercial growth with the imperative of user safety and legal compliance. Recently, Google took a significant step forward by expanding its recurring billing policy. The change specifically targets certified U.S. online pharmacies, allowing them to promote prescription drugs through subscription models and bundled services.

This update is more than a simple technical adjustment; it represents a major pivot in how healthcare services are marketed and consumed in the digital age. By allowing recurring billing for medications and related consultations, Google is acknowledging the rise of telehealth and the growing consumer demand for convenient, long-term healthcare solutions. For digital marketers, SEO specialists, and pharmacy owners, understanding the nuances of this policy expansion is critical for maintaining compliance while maximizing reach.

What the Policy Expansion Covers

The expansion of the recurring billing policy is structured around three primary pillars. These updates allow certified merchants to offer a more holistic and modern purchasing experience for patients who require ongoing medication management.

1. Prescription Drug Subscriptions

The most direct change is the allowance of recurring billing for prescription medications. Previously, the hurdles for setting up recurring payments for controlled or regulated substances were significant. Under the new guidelines, certified U.S. online pharmacies can set up subscription models in which patients are billed automatically at regular intervals for their medication refills. This mirrors the "subscribe and save" models common in general e-commerce, but with the added layers of pharmaceutical oversight.

2. Prescription Drug Bundles

Google now permits the bundling of prescription drugs with supplementary services, such as coaching, specific treatment programs, or wellness monitoring. The core requirement is that the prescription drug must remain the primary product in the bundle. This allows pharmacies to transition from simple pill dispensers to comprehensive healthcare providers that support a patient's entire journey, such as weight-loss programs or chronic-condition management that requires both medication and behavioral coaching.

3. Prescription Drug Consultation Services

Determining eligibility for a prescription often requires a professional consultation. Google's updated policy now allows recurring billing for these consultation services, offered either as standalone subscriptions or bundled directly with the medications themselves. This is a massive boon for the telehealth industry, where ongoing access to a healthcare professional is often a prerequisite for continued medication access.

The Path to Eligibility: Certification and Compliance

While this policy expansion opens new doors, it is not a free-for-all. Google has maintained its high standards for who can participate in these advertising opportunities. To take advantage of these changes, merchants must meet several rigorous requirements.

Maintaining Certified Status

First and foremost, the merchant must be a certified U.S. online pharmacy. This usually involves third-party verification from organizations recognized by Google, such as the National Association of Boards of Pharmacy (NABP) or LegitScript. Without this certification, the doors to recurring billing for pharmaceuticals remain firmly shut. This ensures that only legitimate, licensed entities are reaching consumers, protecting users from the risks associated with rogue online pharmacies.

The Technical Requirement: The [subscription_cost] Attribute

From a technical standpoint, Google Merchant Center users must implement the [subscription_cost] attribute in their product feeds. This attribute provides transparency to Google's systems and, ultimately, to the end consumer. It requires the merchant to clearly define the period of the subscription (monthly, quarterly, and so on) and the cost per period. Accurate data feed management is essential here; any discrepancy between the feed data and the landing page can lead to account suspension.
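
For illustration, the snippet below sketches how [subscription_cost] typically appears in an XML product feed, assuming the standard Merchant Center structure of period, period_length, and amount sub-attributes. The item itself is invented, and pharmacy feeds carry certification requirements not shown here:

```xml
<!-- Illustrative feed entry; the product and prices are invented -->
<item>
  <g:id>rx-refill-001</g:id>
  <g:title>30-Day Prescription Refill Subscription (example)</g:title>
  <g:price>45.00 USD</g:price>
  <g:subscription_cost>
    <g:period>month</g:period>           <!-- billing interval -->
    <g:period_length>1</g:period_length> <!-- periods per charge -->
    <g:amount>45.00 USD</g:amount>       <!-- cost per period -->
  </g:subscription_cost>
</item>
```
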
Transparency on Landing Pages

Google's policy on recurring billing has always prioritized the consumer's right to know exactly what they are signing up for. Landing pages must clearly display all terms and conditions, including the total cost, the frequency of billing, and, perhaps most importantly, how to cancel the subscription. Hidden fees or "dark patterns" that make it difficult for a user to opt out of a recurring charge are strictly prohibited and will result in rapid disapproval of ads.

Why This Matters for the Digital Marketing Landscape

For years, online pharmacies have struggled with the limitations of one-off sales models in an industry that naturally lends itself to long-term relationships. The ability to market subscriptions officially on Google changes the math for Customer Acquisition Cost (CAC) and Lifetime Value (LTV).

Predictable Revenue Streams

The subscription model is the holy grail of modern business because it creates predictable, recurring revenue. Online pharmacies can now forecast their inventory needs and revenue growth with much greater accuracy. This stability allows for more aggressive reinvestment into SEO and PPC campaigns, creating a virtuous cycle of growth.

Enhanced Patient Retention

Medication adherence is a major challenge in healthcare. By offering subscription models, pharmacies make it easier for patients to stay on their prescribed regimens without the friction of remembering to reorder every month. For marketers, this means the focus shifts from constant acquisition to retention and brand loyalty. A patient signed up for a monthly bundle that includes coaching and their medication is much less likely to switch to a competitor.

Competitive Edge in Telehealth

The telehealth market is incredibly crowded, with startups and established healthcare providers all vying for the same digital-first patient. By leveraging Google's new policy, pharmacies can offer more competitive, integrated packages. A pharmacy that offers a weight-loss subscription featuring medication, a monthly doctor check-in, and a digital coaching app will likely outperform a competitor that only sells the medication by the bottle.

Strategic Implementation for SEO and Merchant Center

To succeed under these new rules, businesses need to align their SEO and technical marketing strategies. It isn't enough to simply flip a switch; you need to ensure that your site's infrastructure and content are optimized for the


Google uses both schema.org markup and og:image meta tag for thumbnails in Google Search and Discover

The Evolution of Visual Search: A New Standard for Thumbnails

In the rapidly changing landscape of digital marketing, the visual representation of content has moved from a secondary concern to a primary driver of engagement. Google's latest update to its Image SEO best practices and Google Discover documentation marks a significant shift in how webmasters must approach image optimization. By explicitly stating that the search engine uses both schema.org markup and og:image meta tags to determine thumbnails, Google has provided a clearer roadmap for site owners looking to stand out in both the Search Engine Results Pages (SERPs) and the highly lucrative Discover feed.

For years, SEO professionals debated whether Google prioritized structured data over social meta tags when selecting the "hero" image for a search result or a Discover card. This ambiguity often led to inconsistent results, where a carefully chosen featured image might be ignored in favor of a secondary, less relevant graphic. The recent clarification removes the guesswork, confirming that a multi-layered approach to metadata is the most effective way to influence Google's automated selection process.

Understanding the Core Update: What Google Changed

Google recently revised two critical pieces of documentation: the "Image SEO best practices" guide and the "Google Discover" help document. The core of this update is the addition of a section titled "Specify a preferred image with metadata." In it, Google acknowledges that while its selection of an image preview is completely automated, it draws from a variety of sources to decide which visual best represents a page.

This automation uses computer vision and machine learning to scan a page, but metadata serves as the essential "hint" that guides these algorithms. By providing specific signals through schema.org and Open Graph tags, publishers can effectively tell Google: "This is the most important image on this page." This is particularly vital for text-heavy results, where an image thumbnail can significantly increase the click-through rate (CTR) by making the result more eye-catching.

The Role of Schema.org in Thumbnail Selection

Schema.org is a collaborative, community-driven project aimed at creating a common set of schemas for structured data on the internet. For Google, structured data is the gold standard for understanding the context of a webpage. For images, Google has highlighted three specific properties that influence thumbnail selection: primaryImageOfPage, mainEntity, and mainEntityOfPage.

The primaryImageOfPage property is perhaps the most direct signal you can send. By specifying this property with a URL or an ImageObject, you explicitly label the image as the representative visual for that specific URL. This is especially useful for landing pages, portfolio items, or long-form articles where multiple images exist but one stands out as the definitive visual anchor.

Alternatively, Google suggests using the mainEntity or mainEntityOfPage properties, which describe the primary topic of the page. For example, on a product review page, the mainEntity is the product itself. By attaching an image URL or ImageObject to this main entity, you tell Google that the image is not just a decorative element but is intrinsically linked to the subject matter of the page. This increases the likelihood of that image appearing in product-rich snippets or specialized search layouts.
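
Here is a minimal sketch of primaryImageOfPage in JSON-LD. The URLs are placeholders, and the 1200×675 dimensions simply anticipate the Discover sizing guidance discussed below:

```html
<!-- Illustrative only: URLs and dimensions are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "@id": "https://www.example.com/articles/visual-search",
  "primaryImageOfPage": {
    "@type": "ImageObject",
    "contentUrl": "https://www.example.com/images/hero-1200w.jpg",
    "width": 1200,
    "height": 675
  }
}
</script>
```
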
The Power of og:image Meta Tags

The og:image tag is part of the Open Graph protocol, originally developed by Facebook to allow web pages to become rich objects in a social graph. While its primary purpose has historically been to control how links appear when shared on social media platforms like Facebook, LinkedIn, and X (formerly Twitter), Google has increasingly relied on it as a reliable fallback and cross-reference for Search and Discover.

Google's inclusion of og:image in its official documentation is a major win for publishers who already prioritize social media optimization: the same effort put into making a post look clickable on social feeds now directly benefits the page's visibility in Google's ecosystem. It also means that if your og:image is a generic site logo or a low-resolution placeholder, it could negatively impact your search presence.
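
A minimal sketch of the tag in a page head, paired with the robots setting covered in the Discover section below; the image URL is a placeholder:

```html
<!-- Preferred thumbnail for social sharing, Search, and Discover -->
<meta property="og:image" content="https://www.example.com/images/hero-1200w.jpg">

<!-- Opt in to large image previews (relevant for Discover, discussed below) -->
<meta name="robots" content="max-image-preview:large">
```
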
Optimizing for Google Discover: Higher Stakes for Visuals

Google Discover is a different beast from traditional search. It is a highly personalized, query-less feed that relies almost entirely on visual appeal to drive clicks. Because Discover is built around interests rather than intent, the thumbnail is often the only reason a user decides to engage with a piece of content. Google's updated documentation for Discover emphasizes several strict technical and aesthetic requirements that go beyond basic SEO.

The 1200px Width Standard

One of the most critical takeaways from the Google Discover update is the emphasis on image size. Google recommends that images be at least 1200 pixels wide. This is not just a suggestion for quality; it is a prerequisite for appearing with a "large image" preview. Large images consistently generate higher engagement and visit rates from Discover than small, square thumbnails.

To enable these large image previews, publishers must use the max-image-preview:large robots meta tag (shown in the snippet above) or AMP (Accelerated Mobile Pages). Without this setting, Google may default to a small thumbnail even if your image is high resolution, which can lead to a significant drop in potential traffic.

High Resolution and the 16×9 Aspect Ratio

Google has specified that images in Discover should be high resolution, defined as having at least 300,000 total pixels. Furthermore, a 16×9 aspect ratio is preferred: the widescreen format fits the modern smartphone display, giving the Discover feed a cinematic feel.

While Google does attempt to automatically crop images to fit this ratio, the documentation now warns that manual control is better. If you are cropping a vertical image (common in mobile photography) into a 16×9 landscape format, you must ensure that the most important details remain centered or appropriately framed. If the "meat" of the image is lost during an automated crop, the resulting thumbnail may be confusing or unappealing to users. Specifying a well-cropped version in your metadata keeps that decision in your hands.

Own your branded search: Building a competitive PPC defense

In the high-stakes world of digital marketing, many brands fall into a dangerous trap: the belief that because they rank first organically for their own name, they don't need to spend money on branded Pay-Per-Click (PPC) advertising. This "set it and forget it" mentality is a gift to your competitors. If you are not actively managing your branded search campaigns, you are essentially handing over your reputation and your revenue to rival brands, review aggregators, and affiliate marketers who are more than happy to intercept your most valuable traffic.

Brand protection through PPC is far more nuanced than simply bidding on your company name. It is a multi-layered defensive strategy that involves query monitoring, ad copy experimentation, reputation management, and a deep understanding of the customer research journey. In this guide, we will explore how to build a world-class competitive PPC defense that ensures you own every stage of your branded search experience.

Why Brand Search Deserves More Than Basic Defense

Most PPC managers treat brand campaigns as a low-priority task: they set up a campaign, apply a handful of exact-match brand keywords, and let it run on autopilot. For smaller businesses, this might suffice. For established brands and companies in competitive tech or gaming niches, however, the reality is far more complex. Your brand exists across hundreds of different query contexts, each representing a unique stage of the buyer's journey.

When a user types your brand name into Google, they aren't always looking for your login page. They might be asking "Is [Brand] worth the price?" or "Does [Brand] have [Feature X]?" If you only cover exact-match terms, you are leaving the door wide open for competitors to answer those questions for you. Third-party sites like G2, Capterra, or Reddit often dominate these long-tail branded queries. While these sites can provide social proof, they also feature prominent advertisements from your direct competitors, effectively siphoning off users who were already looking for you.

Furthermore, the cost of losing a branded click is significantly higher than the cost of the bid itself. When a competitor intercepts a branded search, they aren't just getting a lead; they are taking a lead you have already spent time and money nurturing through top-of-funnel marketing. Protecting these searches is about defending your brand equity and ensuring customer trust remains intact from the first click to the final conversion.

4 Categories of Branded Searches You Need to Cover

To build a comprehensive defense, you must categorize branded searches by user intent. Different intents require different messaging, bidding strategies, and landing page experiences. Broadly speaking, branded queries fall into four strategic buckets.

Brand Trust and Reputation Queries

These searchers are in the validation phase. They know who you are, but they are looking for a reason to say "yes." Common queries include:

- "Is [Brand] good?"
- "[Brand] reviews"
- "Is [Brand] legit?"
- "Is [Brand] worth it?"

The competitive threat here is high. Review aggregators and affiliate sites bid on these terms to capture traffic and redirect it to comparison pages where your competitors can pay for top-tier placement. To counter this, you must bid aggressively, and use review extensions and star ratings in your ads to provide immediate social proof.
Instead of sending these users to your homepage, direct them to a dedicated "Why Choose Us" or "Customer Stories" page that highlights awards and testimonials.

Product Features Queries

In this category, users are evaluating whether your solution specifically meets their needs. They are looking for technical specifications or specific capabilities. Examples include:

- "What is [Brand] known for?"
- "Pros and cons of [Brand]"
- "Does [Brand] offer [feature]?"

Competitors often target these queries with ads suggesting their features are superior or easier to use. Your PPC strategy should involve feature-specific ad groups. Use Headline 1 to address the specific feature the user is searching for, and use sitelink extensions to guide them toward detailed documentation or demo videos. This is your chance to prove you have the best-in-class solution for their specific problem.

Comparison Queries

Comparison queries are the most volatile and competitive. These users are actively weighing you against an alternative. They are at a crossroads, and a single persuasive ad could pull them in either direction. Common searches include:

- "Alternatives to [Brand]"
- "How does [Brand] compare?"
- "Is [Brand] better than [Competitor]?"
- "Is [Brand] right for [use case]?"

In this space, you must bid to maintain Position 1. If you aren't at the top of the page, your competitor's "Why We're Better" ad will be the first thing the user sees. Create dedicated comparison landing pages that offer transparent, honest feature tables. If your pricing is a competitive advantage, put it front and center. Monitor your Auction Insights report daily to see which competitors are getting aggressive on your name.

Niche Questions

These queries reveal specific barriers to entry, such as price concerns or security requirements. While lower in volume, they are incredibly high in intent. Examples include:

- "Is [Brand] expensive?"
- "Does [Brand] offer discounts?"
- "Is [Brand] secure?"

Since these queries often have lower competition, you can sometimes maintain visibility with lower bids. However, the ad copy must be precise. If someone asks whether you are expensive, your ad should highlight "High ROI" or "Transparent Pricing." Use search query reports to find these emerging questions and address them proactively, before they become a narrative you can't control.

Advanced Brand Campaign Architecture

A single, massive brand campaign is difficult to optimize. For a truly professional defense, segment your brand architecture into four specialized campaigns.

Core Brand Defense

This is your bedrock. It targets your exact brand name and common misspellings. The goal here is 95% to 100% impression share, and this campaign should never be restricted by budget: if you run out of money here, you are essentially turning off your own sign. Use Responsive Search Ads (RSAs) to test different value propositions, and keep a close eye on "Lost IS (Rank)" to ensure your quality scores and bids are high enough to block out interlopers.


How to revise your old content for AI search optimization

If your brand has been producing digital content for several years, you are likely sitting on a goldmine of information. However, the way that information is accessed is changing fundamentally. We are moving away from an era defined solely by traditional Search Engine Optimization (SEO) and into the age of Answer Engine Optimization (AEO). While the two disciplines overlap, the rise of Large Language Models (LLMs) and AI-driven search features means your old content needs a fresh coat of paint to remain visible.

I am frequently asked by brand marketers how they can gain traction in AI-generated answers. My favorite response is often the simplest: "Revise your old content." This usually sparks an "aha" moment. Because AEO feels so futuristic, many people forget that the most valuable data an AI can find is often already living on their own servers, buried in blog posts from 2021 or white papers from 2022. The challenge lies in reformatting that legacy content so it is legible to AI systems while remaining engaging for human readers.

How do you reformat content for better AEO performance?

The transition from SEO to AEO requires a shift in mindset. Traditional SEO focused on helping a crawler index a page based on keywords. AEO focuses on helping an AI model "understand" and "retrieve" specific facts. When I approach a content refresh for AI optimization, I lean on three core principles: topical breadth and depth, chunk-level retrieval, and answer synthesis.

Optimize for topical breadth and depth

To succeed in an AI-driven search environment, your website must be viewed as an authority on its core subjects. The best way to achieve this is through a hub-and-spoke model, a structure that organizes your site into logical clusters that AI models can easily map. For every primary category or keyword theme, build a comprehensive "hub" page that introduces the broader topic and serves as a central directory. From that hub, link out to "spoke" pages—articles that dive deep into specific facets of the topic.

Each spoke page should have a clear, distinct purpose and address a specific query intent. Because user questions in AI search often branch into niche directions, covering a wide variety of angles expands your overall topical reach. By linking related spoke pages to one another, and consistently back to the hub, you give AI systems clear signals about the semantic relationships between your topics.

Optimize for chunk-level retrieval

One of the most significant shifts in the AI era is how information is consumed. We can no longer rely on the AI model using the entire page for context. Instead, AI systems often use a process called Retrieval-Augmented Generation (RAG) to pull specific "chunks" of text to answer a user's prompt. If your content is buried in long, rambling paragraphs, the AI might fail to extract the relevant data.

To fix this, each section of your article should be independently understandable. Keep your passages semantically tight and self-contained; the goal is one idea per section. If an AI model lifts a single paragraph from your site to answer a question, that paragraph should contain all the context needed to make sense on its own. Companies like Our Family Wizard have implemented this successfully by breaking complex topics into highly focused, bite-sized sections that are easy for both bots and humans to digest.

Optimize for answer synthesis

AI models are designed to summarize. You can make their job easier—and increase your chances of being cited—by doing the summarization for them. Start your sections with direct, concise sentences that answer a question immediately. Avoid fluff and introductory throat-clearing.

A highly effective strategy is to include a "Summary" or "Key Takeaways" section at the top of long-form posts. This provides a TL;DR that an AI model can quickly synthesize. When formatting these summaries, favor a plain, factual, non-promotional tone: AI models are trained to look for objective information, and overly salesy language can sometimes be filtered out or ignored in favor of more clinical sources. Baseten, for example, places easily digested summaries at the top of its technical posts, providing a clear roadmap for any AI system scanning the page.
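
As a sketch of what a self-contained, summary-first section can look like in HTML, with invented heading and copy:

```html
<!-- Illustrative structure; the topic and wording are invented -->
<section id="what-is-aeo">
  <h2>What is Answer Engine Optimization (AEO)?</h2>
  <!-- Direct answer first, so the passage still makes sense if lifted alone -->
  <p>Answer Engine Optimization (AEO) is the practice of structuring content
  so that AI systems can retrieve a single passage and present it as a
  complete, accurate answer.</p>
  <ul>
    <li>One idea per section, stated in the opening sentence.</li>
    <li>Self-contained context, with no reliance on earlier paragraphs.</li>
  </ul>
</section>
```
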
Optimize for chunk-level retrieval

One of the most significant shifts in the AI era is how information is consumed. We can no longer rely on the AI model using the entire page for context. Instead, AI systems often use a process called Retrieval-Augmented Generation (RAG) to pull specific “chunks” of text to answer a user’s prompt. If your content is buried in long, rambling paragraphs, the AI might fail to extract the relevant data.

To fix this, each section of your article should be independently understandable. Keep your passages semantically tight and self-contained. The goal is “one idea per section.” If an AI model “lifts” a single paragraph from your site to answer a question, that paragraph should contain all the necessary context to make sense on its own. Companies like Our Family Wizard have successfully implemented this by breaking complex topics into highly focused, bite-sized sections that are easy for both bots and humans to digest.
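To see why self-contained chunks matter, consider a toy version of the retrieval step. The sketch below is plain Python, with word-overlap scoring standing in for the embedding similarity a real RAG pipeline would use, and the sample article text is invented. The takeaway is that each chunk is scored in isolation, so a passage that leans on surrounding context for meaning will simply lose the retrieval contest.

# A toy retriever: split an article into chunks by heading, then score each
# chunk against a query by word overlap. Real systems use embeddings, but
# the lesson is identical: every chunk is judged on its own.

import re

ARTICLE = """\
## What is co-parenting software?
Co-parenting software is a shared tool that helps separated parents
coordinate schedules, expenses, and messages in one place.

## How much does co-parenting software cost?
Co-parenting software is typically sold as a monthly or annual
subscription, with pricing that varies by feature tier.
"""

def chunk_by_heading(text: str) -> list:
    """Split on heading markers; each resulting chunk must stand alone."""
    parts = re.split(r"(?m)^## ", text)
    return [p.strip() for p in parts if p.strip()]

def score(chunk: str, query: str) -> int:
    """Crude relevance: count the query words that appear in the chunk."""
    chunk_words = set(re.findall(r"\w+", chunk.lower()))
    return sum(w in chunk_words for w in re.findall(r"\w+", query.lower()))

def retrieve(query: str, text: str = ARTICLE) -> str:
    """Return the single best chunk, the way a RAG pipeline lifts one passage."""
    return max(chunk_by_heading(text), key=lambda c: score(c, query))

print(retrieve("how much does it cost"))

Notice that the winning chunk repeats “co-parenting software” instead of saying “it.” That redundancy feels awkward to writers, but it is exactly what makes the passage liftable on its own.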
Optimize for answer synthesis

AI models are designed to summarize. You can make their job easier—and increase your chances of being cited—by doing the summarization for them. Start your sections with direct, concise sentences that answer a question immediately. Avoid “fluff” or introductory throat-clearing.

A highly effective strategy is to include a “Summary” or “Key Takeaways” section at the top of long-form posts. This provides a “TL;DR” (Too Long; Didn’t Read) that an AI model can quickly synthesize. When formatting these summaries, favor a plain, factual, and non-promotional tone. AI models are trained to look for objective information, and overly “salesy” language can sometimes be filtered out or ignored in favor of more clinical sources. Baseten, for example, uses this approach by placing easily digested summaries at the top of their technical posts, providing a clear roadmap for any AI system scanning the page. For those looking to dive deeper into this concept, you can explore how to keep your content fresh in the age of AI to ensure your updates stay relevant as models evolve.
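One practical way to retrofit this onto legacy posts: if every section already leads with its answer, a first draft of the Key Takeaways block can be generated mechanically. The sketch below is illustrative only (the sample sections are invented, and the output is a starting point for a human editor, not copy to publish as-is); it lifts the first sentence of each section into a TL;DR list.

# Draft a "Key Takeaways" block by lifting the first sentence of each section.
# This only works if sections lead with their answer, which is exactly the
# habit that answer synthesis rewards.

import re

def first_sentence(chunk: str) -> str:
    """Return the first sentence of a section, skipping its heading line."""
    body = chunk.split("\n", 1)[1] if "\n" in chunk else chunk
    match = re.search(r".+?[.!?]", body.replace("\n", " ").strip())
    return match.group(0) if match else body.strip()

def key_takeaways(sections: list) -> str:
    """Assemble a plain, factual TL;DR block from section lead sentences."""
    lines = ["Key Takeaways"]
    lines += ["- " + first_sentence(s) for s in sections]
    return "\n".join(lines)

sections = [
    "What is AEO?\nAEO is the practice of structuring content so AI answer"
    " engines can retrieve and cite it.\nIt builds on classic SEO.",
    "Does old content still matter?\nYes: revised legacy posts are often the"
    " fastest route into AI-generated answers.",
]
print(key_takeaways(sections))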
How will humans react to that formatting?

A common concern among marketers is that optimizing for AI will make their content unreadable for humans. However, the opposite is usually true. AI readability is fundamentally about clarity, and human readers—especially those browsing on mobile devices—crave clarity and speed. AI systems favor content where:

- Answers are explicitly named rather than vaguely inferred.
- Sections have a clear, singular intent.
- Key points can be understood without reading the entire document.

In practice, this means being more explicit than traditional SEO ever required. You should define terms directly, summarize sections, and state your conclusions early. This is the antithesis of the old-school “keyword-stuffed” content that was often overwritten to meet an arbitrary word count that creators thought the Google algorithm preferred. By getting to the point quickly, you improve the experience for the human user who is looking for a quick answer.

However, there is a risk: oversimplification. Not every page should be reduced to a single atomic answer. Content that is meant to be strategic, opinionated, or narrative still requires a certain flow. I try to strike a balance by following a specific hierarchy:

1. Explain the core concept first.
2. Elaborate with nuances later.
3. Label your insights clearly.
4. Provide proof or data to back them up.
5. Make the answer obvious before adding layers of sophistication.

When this balance is achieved, the content satisfies the AI’s need for data and the human’s need for context. But a word of caution: beware of the “AI look.” LLM-produced content has a very recognizable footprint—think of the generic posts saturating LinkedIn. You must make sure your revised content still sounds like a person wrote it.

Google publishes Universal Commerce Protocol help page

The landscape of digital commerce is undergoing a radical transformation, shifting from a model of discovery and referral to one of seamless, integrated transactions. In a significant move that signals the future of “agentic shopping,” Google has officially published a comprehensive help page detailing the Universal Commerce Protocol (UCP). This move provides merchants with the technical blueprint and operational guidance necessary to participate in Google’s evolving ecosystem, where the line between search and purchase is becoming increasingly blurred.

The Universal Commerce Protocol is not just a minor update to the Google Merchant Center; it represents a fundamental shift in how Google handles transactions across its various surfaces, including Search, YouTube, and the AI-driven Gemini. By enabling a native checkout experience directly within Google’s environment, UCP aims to eliminate the friction that often leads to cart abandonment during the transition from a search engine to a merchant’s website.

What is the Universal Commerce Protocol (UCP)?

At its core, the Universal Commerce Protocol (UCP) is a standardized framework that allows merchants to offer a native “Buy” button on Google surfaces. Unlike traditional Google Shopping ads, which redirect users to a merchant’s website to complete a purchase, UCP-powered checkout allows the entire transaction to happen within the Google interface.

However, a critical distinction remains: the merchant stays the “seller of record.” This means that while the user interacts with Google to select items and confirm payment, the merchant is still responsible for fulfillment, customer service, and handling returns. Google acts as the facilitator and the interface, but the legal and operational responsibility for the sale remains with the business selling the product. This hybrid model aims to provide the convenience of a marketplace with the brand autonomy of a direct-to-consumer store.

How UCP-Powered Checkout Functions

The UCP-powered checkout experience is designed to be as frictionless as possible. When a user finds a product they want to buy on a Google surface—whether through an AI Overview, a YouTube video, or a standard search result—they can click a native purchase button. This triggers a checkout flow that utilizes the information stored in the user’s Google account.

Payments are processed using Google Wallet credentials. This is a strategic move for Google, as it leverages the millions of saved payment methods already stored in Google accounts worldwide. For the merchant, this means the technical infrastructure must be able to support Google Pay tokens. The transaction data is passed securely from Google to the merchant’s backend, where the order is processed as if it had occurred on the merchant’s own site.

The Implementation of the native_commerce Attribute

For merchants looking to activate this feature, the technical gateway lies within the Google Merchant Center. Specifically, Google has introduced the native_commerce attribute. By implementing this attribute in their product feeds, merchants signal to Google that their products are eligible for native checkout via UCP. This attribute serves as the “on switch” for the protocol. Without it, products will continue to be displayed as standard listings that redirect to external websites.
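Conceptually, the attribute is just one more field on each product row. The Python sketch below is hypothetical: the surrounding columns (id, title, price, link) mirror familiar Merchant Center feed fields, but the exact schema and accepted values for native_commerce should be confirmed against Google’s help page rather than taken from this example.

# Hypothetical sketch: flagging products for UCP native checkout in a
# tab-separated feed. Confirm the real native_commerce schema against
# Google's documentation before using anything like this in production.

import csv
import sys

products = [
    {
        "id": "SKU-1001",
        "title": "Trail Running Shoe",
        "price": "89.99 USD",
        "link": "https://example-store.com/products/sku-1001",
        "native_commerce": "true",   # opt this product into native checkout
    },
    {
        "id": "SKU-1002",
        "title": "Gift Card",
        "price": "25.00 USD",
        "link": "https://example-store.com/products/sku-1002",
        "native_commerce": "false",  # keep as a standard redirect listing
    },
]

# Emit a tab-separated feed, a common Merchant Center upload format.
writer = csv.DictWriter(sys.stdout, fieldnames=list(products[0]), delimiter="\t")
writer.writeheader()
writer.writerows(products)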
The introduction of this attribute suggests that Google is moving toward a self-service model for native commerce, allowing any merchant with the right technical setup to opt in to this high-conversion experience.

The Shift Toward Agentic Shopping and Gemini Integration

The timing of the UCP documentation release is no coincidence. Google has been aggressively pivoting toward “agentic” search—an approach where AI doesn’t just provide information but also completes tasks on behalf of the user. In the context of e-commerce, this means an AI agent like Gemini could research products, compare reviews, and eventually finalize the purchase for the user.

Without UCP, an AI agent would have to “hand off” the user to a third-party website, where the AI might lose its ability to assist or track the transaction. With UCP, the entire funnel—from the initial query to the final confirmation—happens within an environment that the AI can navigate. This is particularly relevant for “AI Mode” in Google Search and the Gemini app, where a seamless “Buy” button makes the transition from conversation to conversion instantaneous.

Reducing Friction in the Path to Purchase

In digital marketing, friction is the enemy of conversion. Every extra click, every page load, and every form field a user has to fill out increases the likelihood that they will abandon their purchase. UCP addresses this by removing the need for a user to navigate a new website, create a new account, or manually enter credit card details on a mobile device.

For mobile users, who often struggle with small-screen navigation and clunky checkout forms, UCP is a game-changer. By using Google Wallet and a unified interface, Google is essentially providing a “one-click” experience across the entire web for participating merchants.

Merchant as the Seller of Record: Why It Matters

One of the most important aspects of the UCP documentation is the clarification that the merchant remains the seller of record. This distinguishes UCP from traditional third-party marketplaces, where the platform might take a more significant role in the transaction and the customer relationship. Being the seller of record has several implications for merchants:

- Tax Compliance: Merchants are responsible for calculating and remitting sales tax based on the customer’s location.
- Customer Data: While Google facilitates the transaction, the merchant still receives the necessary order data to build a relationship with the customer.
- Brand Experience: The merchant’s name is front and center during the transaction, ensuring that the brand identity isn’t lost within the Google ecosystem.
- Fulfillment Control: The merchant manages their own shipping carriers, packaging, and delivery timelines.

This model is particularly attractive to mid- to large-scale retailers who want the reach of Google but are unwilling to give up control of their customer lifecycle to a marketplace entity.

Technical Requirements and Payment Processing

The new help page provides a roadmap for the technical requirements merchants must meet to support UCP. Beyond the native_commerce attribute, the most significant requirement is payment processor compatibility. Because UCP relies on Google Pay tokens, a merchant’s payment gateway must be capable of accepting and processing those tokens.
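On the merchant side, the general shape is an order handler: accept the order payload from Google, pass the payment token to your processor, and record the order in your own system, since the merchant remains the seller of record. Everything in the sketch below is hypothetical (the payload fields, the charge() stand-in for the payment gateway, and the response shape); the real schema comes from Google’s UCP documentation.

# Hypothetical sketch of a merchant-side UCP order handler. Every field name
# is invented for illustration; charge() stands in for your payment
# processor's API, which would consume the Google Pay token.

from dataclasses import dataclass

@dataclass
class ChargeResult:
    success: bool
    psp_reference: str

def charge(google_pay_token: str, amount_cents: int, currency: str) -> ChargeResult:
    """Stand-in for the gateway call that accepts a Google Pay token."""
    # A real gateway would validate and settle the token; we pretend it worked.
    return ChargeResult(success=True, psp_reference="psp-12345")

def handle_ucp_order(payload: dict) -> dict:
    """Charge the payment token, then record the order in the merchant's
    own system for fulfillment, returns, and customer service."""
    result = charge(
        google_pay_token=payload["payment"]["google_pay_token"],
        amount_cents=payload["order"]["total_cents"],
        currency=payload["order"]["currency"],
    )
    if not result.success:
        return {"status": "declined"}
    # The order is processed as if it had been placed on the merchant's site.
    return {"status": "confirmed", "merchant_order_id": "order-" + result.psp_reference}

example_payload = {
    "payment": {"google_pay_token": "tok_abc123"},
    "order": {"total_cents": 8999, "currency": "USD"},
}
print(handle_ucp_order(example_payload))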
