Author name: aftabkhannewemail@gmail.com


Adobe to shut down Marketo Engage SEO tool

Understanding the Deprecation of the Marketo Engage SEO Tool

In a move that signals a significant shift in its product roadmap, Adobe has officially announced the upcoming shutdown of the native SEO tool within Marketo Engage. This decision, detailed in the February 2026 release notes, marks the end of an era for one of the platform’s legacy features. For digital marketers and demand generation professionals who have relied on Marketo for their end-to-end campaign management, this change necessitates a proactive approach to data preservation and a pivot toward more robust search engine optimization solutions.

The SEO tool within Marketo Engage was designed to provide marketers with basic keyword tracking, inbound link analysis, and page-level optimization suggestions. However, as the digital marketing landscape has matured, the requirements for a competitive SEO strategy have evolved far beyond the capabilities of a secondary feature within a marketing automation platform (MAP). Adobe’s decision to sunset the tool reflects a broader industry trend of consolidating specialized tasks into dedicated, best-in-class software suites.

Key Dates and Deadlines for Marketo Users

For organizations currently utilizing the Marketo Engage SEO feature, there is a specific timeline that must be followed to ensure no critical historical data is lost. Adobe has set a hard deadline for the deprecation, giving users a window to transition their workflows. The SEO feature will be officially deprecated on March 31, 2026. Up until this date, users will continue to have access to the SEO tile within the Marketo interface; it is also the final day to perform any administrative tasks or data exports related to the tool. On April 1, 2026, the SEO tile will be permanently removed from the platform, and all associated data that has not been exported will be inaccessible. Adobe recommends that administrators begin the export process as soon as possible.
Because the tool tracked historical keyword rankings and site audits, this data can be invaluable for longitudinal reporting. Failing to secure these records before the March 31 cutoff could result in a significant gap in an organization’s marketing intelligence.

Why Adobe Is Closing the SEO Chapter in Marketo

The decision to remove a feature from a flagship product like Marketo Engage is never made in a vacuum. According to Adobe’s Keith Gluck, the primary driver behind this move is the desire to allow the Marketo Engage team to focus their development resources on high-impact areas of the platform. In the competitive world of SaaS, “feature creep”—the tendency to keep adding minor tools that eventually become difficult to maintain—can distract from core product innovation.

Internal reports suggest that the SEO tool suffered from low adoption rates. Many Marketo users already utilized external, specialized platforms for their search strategy, leaving the native SEO tile largely unconfigured. By deprecating features that see minimal use, Adobe can streamline the user experience and dedicate more engineering power to lead scoring, attribution modeling, and AI-driven content personalization—areas where Marketo remains a market leader.

The Impact of the Semrush Acquisition

Perhaps the most significant reason for the shutdown is Adobe’s 2025 acquisition of Semrush. This strategic move fundamentally changed Adobe’s value proposition regarding search visibility. Semrush is widely regarded as one of the most comprehensive SEO and digital marketing suites available, offering deep insights into keyword research, backlink profiles, competitive intelligence, and technical site health. With Semrush now a part of the Adobe family, maintaining a basic, legacy SEO tool inside Marketo Engage no longer made strategic sense.
It would have been redundant to invest in upgrading Marketo’s native SEO capabilities when the company now owns a platform that is purpose-built for that exact task. This acquisition provides Adobe customers with a path toward a much more powerful SEO experience, integrated within the broader Adobe Experience Cloud ecosystem.

The Evolution of SEO in the Era of AI and LLMs

The timing of this deprecation also coincides with a massive transformation in how search engines operate. The rise of Large Language Models (LLMs) and AI-powered search experiences (such as Google’s Search Generative Experience) has made traditional SEO more complex. Modern SEO is no longer just about tracking keyword positions; it involves understanding user intent, optimizing for conversational queries, and managing brand presence across various AI platforms.

Legacy tools, like the one being removed from Marketo, were built for a “10 blue links” world. They struggle to provide meaningful insights into the nuances of modern, AI-driven search. By moving away from these older tools and leaning into the advanced analytics provided by platforms like Semrush, Adobe is positioning its users to better handle the volatility and complexity of the modern search landscape.

How to Export Your Marketo SEO Data

To prepare for the March 31, 2026 deadline, Marketo administrators should follow a structured data migration plan. The data within the SEO tool is typically divided into several categories, including keyword lists, page optimization scores, and competitor tracking. To preserve this information, users should navigate to the SEO area of Marketo Engage and look for the export options available in each view. It is advisable to export these files into a standardized format like CSV or Excel. Once the data is exported, it can be imported into a new SEO management platform or stored in a centralized marketing data warehouse for historical reference.
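As a concrete sketch of the archival step described above: assuming the individual exports land in one folder as CSV files (the file and column names below are hypothetical, not Marketo's actual export schema), a small script can flatten them into a single long-format archive that is easy to load into a warehouse later.

```python
import csv
from pathlib import Path

def consolidate_exports(export_dir: str, archive_path: str) -> int:
    """Merge individual SEO CSV exports (keyword lists, page scores,
    competitor tracking, etc.) into one archive file, tagging each row
    with the export it came from so the history stays queryable after
    the source tool is gone. Returns the number of rows archived."""
    rows_written = 0
    archive = Path(archive_path).resolve()
    with open(archive, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["source_export", "field", "value"])
        for csv_file in sorted(Path(export_dir).glob("*.csv")):
            if csv_file.resolve() == archive:
                continue  # don't try to re-read the archive we are writing
            with open(csv_file, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    # Flatten to key/value pairs so exports with different
                    # columns can share one archive schema.
                    for field, value in row.items():
                        writer.writerow([csv_file.stem, field, value])
                        rows_written += 1
    return rows_written
```

The long (source, field, value) shape trades compactness for flexibility: it lets exports with completely different columns coexist in one table for historical reference.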
Adobe has provided specific instructions through their Experience League community pages to assist users with the technical aspects of this export process.

Transitioning to a Dedicated SEO Solution

For organizations that were actively using Marketo for SEO, the sunsetting of the tool is an opportunity to upgrade their tech stack. While the native tool offered convenience, dedicated SEO platforms provide a level of depth that is necessary for modern B2B marketing. Here are the primary areas where a dedicated tool will offer an immediate upgrade:

Advanced Keyword Research

Unlike the basic tracking in Marketo, dedicated tools allow for deep keyword discovery, including “People Also Ask” data, search volume trends, and keyword difficulty scores. This allows marketers to build more effective content calendars based on


Why your law firm’s best leads don’t convert after research

In the legal industry, a referral is often considered the gold standard of lead generation. When a former client or a colleague recommends your firm, the hard work of building trust is supposedly already done. The prospect arrives with a baseline of confidence, pre-sold on your expertise. However, a frustrating trend has emerged in recent years: high-quality referrals are entering the top of the funnel but failing to reach the consultation stage. They disappear after doing their own research.

If your law firm is seeing a disconnect between the number of people who say they were referred to you and the number of people who actually sign a retainer, the problem likely lies in what is known as the referral validation gap. In the digital-first era, a recommendation is no longer the final step; it is the first. Today’s legal consumers are savvy researchers. They take that trusted recommendation and immediately head to Google, social media, and AI platforms to verify it. If your digital presence contradicts the high praise they received, the lead will vanish before you even know they existed.

The referral validation gap represents the critical moments during online research where trust is either solidified or broken. While this phenomenon is particularly prevalent in the legal sector due to the high-stakes nature of the work, these dynamics apply to any professional service or referral-based business. To capture these high-value leads, firms must align their digital footprint with the expectations set by their referrers.

The Four Types of Referral Validation Failure

Referral loss is rarely accidental; it follows predictable patterns rooted in psychological friction and digital inconsistencies. By identifying where your firm falls short, you can implement specific technical and creative fixes to bridge the gap.
We can categorize these failures into four primary areas: credibility, specificity, authority, and friction.

1. Credibility Gaps: The First Impression Crisis

Psychological research suggests that website visitors form an opinion about a brand in less than three seconds. For a referred lead, this window is even more critical. They arrive with a mental image of a professional, authoritative, and successful firm based on the recommendation they received. If your website looks like it hasn’t been updated since 2012, or if it feels generic and cluttered, you create an immediate cognitive dissonance.

A credibility gap occurs when your digital presence fails to reflect the quality of your legal work. Common culprits include thin attorney biographies, a lack of professional photography, and the use of “hollow” marketing speak. When a site relies on vague terms like “experienced” or “results-driven” without providing the proof to back them up, it triggers skepticism. The prospect’s thought process is simple: “If this lawyer is as good as my friend says, why is their website so unprofessional?”

To fix credibility gaps, firms must focus on visual trust signals. This includes high-quality headshots, modern web design that prioritizes readability, and “above-the-fold” placement of credentials, awards, and case results. Technical performance is also a factor here. A slow-loading site or a broken mobile experience suggests a lack of attention to detail—a trait no one wants in their legal counsel.

2. Specificity Gaps: The Disconnect Between Problem and Solution

Most legal referrals are highly specific. A client isn’t usually referred to a “general lawyer”; they are referred to a lawyer who is “the best at handling complex custody disputes” or “the expert in New York ground lease negotiations.” The problem is that many law firm websites are built to be broad, fearing that narrowing their focus will scare away other leads.
When a prospect referred for a specific, painful problem lands on a generic homepage, they don’t see themselves or their issue reflected. If they have to hunt through menus to find a mention of their specific legal challenge, the momentum of the referral dies. They begin to wonder if the person who referred them was mistaken or if the firm has pivoted away from that specialty.

Closing the specificity gap requires a robust content strategy that prioritizes practice area landing pages. Each page should speak directly to the nuances of that niche. For example, instead of a broad “Family Law” page, a firm might have detailed sub-pages for “High Net Worth Divorce” or “International Child Abduction.” These pages should feature specific case results and FAQs that address the exact questions a referred prospect is likely to have. If the prospect finds their specific problem described in detail within two clicks, the validation is successful.

3. Authority Gaps: Failing the AI and Third-Party Test

In 2024 and beyond, validation happens beyond your own website. Prospects are increasingly using AI search tools like ChatGPT, Perplexity, and Google’s AI Overviews to “vet” their choices. They ask questions like, “Is [Firm Name] actually good at [Niche Specialty]?” or “Who are the top-rated trial lawyers for medical malpractice in Chicago?”

If these AI tools cannot find structured, credible information about your firm, they will not confirm the referral. Worse, if a competitor has better-optimized content, the AI might suggest them as an alternative, even though the prospect was looking for you. This is the ultimate authority gap: when the “automated collective intelligence” of the internet fails to back up your human reputation. Authority is no longer just about what you say; it’s about what the digital ecosystem says about you.
This involves technical SEO elements like Schema markup (LegalService, Attorney, and FAQ Schema), which helps AI and search engines understand the “entities” associated with your firm. It also involves “Share of Voice” in AI-generated answers. If your firm isn’t appearing in AI citations, you are effectively invisible during a crucial part of the research phase.

4. Friction Gaps: The Breakdown of the Conversion Path

Friction gaps are perhaps the most tragic form of referral loss because they happen after the prospect has decided they want to hire you. They have validated your credibility, found your specific expertise, and confirmed your authority via search. They are


7 ways to use storytelling in a business blog

SEO has evolved far beyond the era of simple shortcuts and quick wins. In the modern digital landscape, what drives sustainable results isn’t just the volume of content you produce—it’s content that earns attention, builds deep-seated trust, and ultimately converts a passive visitor into a loyal customer. As search engines like Google become increasingly sophisticated at identifying high-quality, human-centric information, the bridge between technical optimization and genuine user engagement has become narrower than ever.

Storytelling plays a direct and pivotal role in this evolution. When used effectively, narrative techniques do more than just entertain; they improve engagement signals, strengthen topical relevance, and turn generic traffic into purposeful action. By weaving a narrative thread through your business blog, you move from being a mere information provider to a trusted authority that resonates with your audience on a psychological level.

Here are seven storytelling techniques you can apply to your business blog to enhance your SEO performance and drive meaningful business outcomes.

7 storytelling techniques that drive engagement and conversions

To master the art of the business blog, you must rethink how your content flows. From the opening hook that captures a wandering eye to the final call to action that seals the deal, every element should serve a narrative purpose. Use these techniques to shape your content into a compelling journey for your readers.

1. Hook the reader

A quip often attributed to T.S. Eliot advises: “If you start with a bang, you won’t end with a whimper.” In the world of content marketing, this sentiment has never been more relevant. With millions of blog posts published every day, your introduction is the thin line between a high bounce rate and a successful conversion.
Many modern authors recommend a technique called “in medias res”: starting a story in the middle of the action and letting readers catch up as the narrative unfolds. While this is common in thrillers or memoirs, you might wonder how it applies to a B2B SaaS blog or a B2C e-commerce site. The truth is, you can still hook your reader using various professional techniques that create immediate intrigue:

Challenge a commonly held belief: Bold statements like “The E-E-A-T model is flawed” or “Keyword research is dead” immediately demand attention because they trigger a cognitive dissonance that the reader wants to resolve.

Start with a narrative: You don’t need to begin with “Once upon a time.” Instead, describe a specific day in the life of a frustrated manager or the exact moment a business realized its strategy was failing.

Use a striking statistic: Numbers provide instant authority. For example, stating that “Google has 89.9% of search engine market share worldwide” provides a sense of scale and urgency that qualitative descriptions often lack.

Make a bold promise: Address the reader’s desire directly. Ask them: “Would you like to write business blogs that drive organic traffic and convert visitors to customers?”

Empathize with a reader’s problems: Start with a relatable pain point. “Do you struggle with writing business content your customers would actually want to read?” This establishes an immediate connection.

Use a quote that epitomizes your message: A well-chosen quote from an industry leader or philosopher can set the thematic tone for the entire piece.

Don’t be afraid to combine these techniques. For instance, you might start with a success story (narrative) that highlights a massive growth percentage (statistic) while empathizing with the struggle it took to get there. This layered approach is particularly effective for B2B blogs where trust is the primary currency.

2. Make promises and deliver on them

Great stories are built on the foundation of foreshadowing. Whether it is a subtle hint in a mystery novel or the dramatic irony in a play, foreshadowing keeps the audience invested by promising a future payoff. Your business blog should operate on the same principle.

To keep a reader moving down the page, you must build suspense. Use phrases like “In this guide, you will learn…” or “By the end of this article, you will discover the secret to…” This creates a mental “open loop” in the reader’s brain, which humans are naturally wired to want to close. Compelling language throughout the body of the post serves as the fuel that keeps them reading until they reach that promised solution.

From an SEO perspective, this technique has a secondary, highly technical benefit, which matters most the first time you mention a keyword. Regardless of what you write for a meta description, Google often ignores your pre-written snippet and pulls text directly from the page—most commonly from the area where your primary keyword is first mentioned. If that first mention is part of a compelling promise about what your article or product will deliver, it significantly improves your click-through rate (CTR) from the search engine results page (SERP).

For more on how to keep readers glued to your page, you can explore these 5 behavioral strategies to make your content more engaging.

3. Talk to your reader directly

In literary circles, writers debate the merits of first-person (“I”) versus third-person (“They/He/She”) perspectives. However, business bloggers have a “secret weapon” that fiction writers often avoid: the second person (“You”). Directly addressing your reader creates an intimate, conversational atmosphere. It transforms a lecture into a consultation.
Consider the psychological difference between these two statements:

“We help our customers to achieve better SEO results.”

“We will help you to achieve better SEO results.”

The first statement is about the company; the second is about the reader. By centering the reader as the protagonist of the story, you make the content feel personal and actionable.

Furthermore, there is a largely overlooked word in content marketing: “My.” While “You” works for the educational portion of the blog, “My” is incredibly powerful for calls to action (CTAs). In a story, the reader imagines themselves as the hero. A CTA that says “Start my free trial” or “Download my guide” reinforces that ownership. Experiment with this phrasing in your buttons and links—you may be


How To Track AI Visibility & Prompts The Right Way via @sejournal, @lorenbaker

The digital marketing landscape is undergoing a tectonic shift. For decades, Search Engine Optimization (SEO) was a relatively straightforward game of keywords, backlinks, and technical health. However, with the rise of Large Language Models (LLMs) and AI-integrated search engines like Google’s Search Generative Experience (SGE), Bing Chat, Perplexity, and OpenAI’s SearchGPT, the rules have changed. It is no longer enough to track which position your website holds on a traditional Search Engine Results Page (SERP).

Today, the most critical metric for forward-thinking brands is AI visibility. Understanding how AI models perceive your brand and how often they cite your content in response to user prompts is the next frontier of digital strategy. Tracking AI visibility and prompts allows marketers to move beyond simple rankings and into the realm of influence. To succeed in this new era, you must learn how to monitor, analyze, and optimize your presence within these black-box systems.

The Evolution from Keywords to Prompts

In traditional search, users enter short, fragmented queries like “best laptop 2024.” In the AI era, user behavior is shifting toward natural language prompts. A user might now ask, “I am a graphic designer looking for a lightweight laptop under $1,500 with a long battery life; what are my best options?”

This shift from keywords to complex prompts changes everything for search professionals. Prompts are more conversational, specific, and intent-driven. Because they are more detailed, the responses generated by AI are highly personalized. If you aren’t tracking how AI models handle these specific prompts, you are missing out on a massive segment of the “search” journey. Tracking prompts means understanding the context in which your brand is being mentioned—or why it is being ignored.

What is AI Visibility?

AI visibility refers to the frequency and prominence with which your brand, product, or content appears in AI-generated responses.
Unlike the traditional “10 blue links,” AI visibility is often bundled into a narrative. An AI might summarize three different articles to answer a user’s question. If your content provides the core facts for that summary, you have high visibility, even if the user never clicks through to your site.

Tracking this visibility is essential for several reasons. First, it helps you understand your “Share of Model.” Much like Share of Voice, this tells you how much of the AI’s “mindshare” you own compared to competitors. Second, it identifies gaps in your content strategy. If an AI provides an answer that is factually incorrect about your brand or omits you entirely, it indicates a lack of authoritative data available for the model to ingest.

Establishing a Framework for Tracking AI Prompts

To track AI prompts effectively, you cannot rely on the same tools you use for Google Search Console. You need a specialized framework that accounts for the non-linear nature of AI interactions. Here is how to build that framework from the ground up.

1. Identify Your Core Prompt Categories

Start by categorizing the types of prompts your target audience is likely to use. These generally fall into three buckets:

Informational prompts: Users asking for explanations, “how-to” guides, or definitions. (e.g., “How does cloud computing work?”)

Comparative prompts: Users weighing two or more options. (e.g., “Compare the iPhone 15 Pro vs. Samsung S24 Ultra.”)

Transactional/actionable prompts: Users looking for a specific recommendation or a path to purchase. (e.g., “Find me a hotel in New York with a gym and free breakfast.”)

By categorizing prompts, you can track which areas your brand excels in and where you are losing ground to competitors.

2. Monitoring Citation and Attribution

One of the most valuable forms of AI visibility is the citation. When an AI model like Perplexity or SGE provides a source link, it is a direct endorsement of your authority.
Tracking how often you are cited—and for which topics—is the new version of backlink monitoring. You should look for:

Direct links to your articles.

Brand mentions within the text (even without a link).

The sentiment of the mention (positive, neutral, or negative).

3. Analyzing Answer Accuracy

AI models are prone to hallucinations. Tracking prompts allows you to see if the AI is presenting your brand accurately. If you find that an LLM is consistently misrepresenting your pricing, features, or company history, you need to investigate your structured data and the clarity of your on-site content to ensure the model is “learning” the correct information.

Tools and Methodologies for Measuring AI Presence

Since this is a relatively new field, the tooling is still evolving. However, there are several ways to gather data on your AI visibility today.

Manual “Secret Shopper” Testing

The most basic way to track visibility is to manually interact with various AI models. Create a spreadsheet of your most important “money prompts” and run them through ChatGPT, Claude, Gemini, and Bing. Document whether your brand is mentioned, where the AI is getting its information, and the tone of the response. While time-consuming, this provides qualitative insights that automated tools might miss.

Automated AI Tracking Platforms

Newer SEO platforms are beginning to offer AI tracking modules. These tools simulate thousands of prompts and aggregate the data to show you your “AI Rank.” They can identify which pages are being used as sources most frequently and highlight when a competitor suddenly gains visibility in a specific niche.

Analyzing Referral Traffic

While some AI platforms do not pass through clear referral data, many do. Keep a close eye on your analytics for traffic coming from “openai.com,” “perplexity.ai,” or “google.com” (specifically looking for SGE-driven clicks). A spike in traffic from these sources indicates that your content is successfully triggering AI citations.
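The referral-analysis step above can be sketched as a small classifier over raw referrer URLs pulled from an analytics export. The hostname-to-platform map below is illustrative only; the exact hostnames that show up in your analytics vary by platform and change over time, so treat the list as an assumption to verify against your own data.

```python
from urllib.parse import urlparse

# Illustrative mapping of referrer hostnames to AI platforms.
# Verify against the hostnames that actually appear in your analytics.
AI_REFERRER_HOSTS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
}

def classify_referrers(referrer_urls):
    """Count visits per AI platform from a list of raw referrer URLs,
    bucketing everything unrecognized under 'Other'."""
    counts = {}
    for url in referrer_urls:
        host = urlparse(url).netloc.lower()
        platform = AI_REFERRER_HOSTS.get(host, "Other")
        counts[platform] = counts.get(platform, 0) + 1
    return counts
```

Run weekly over your referrer log and chart the per-platform counts; a sustained rise in any AI bucket is the "spike" signal described above.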
The Importance of Contextual Prompt Engineering

To track the “right way,” you must think like a prompt engineer. When testing your visibility, don’t just use one variation of a question. The way a prompt is phrased can significantly alter the AI’s output; this is known as “prompt sensitivity.” For example, if you are a SaaS company, track prompts like: “What is the best CRM for small businesses?” “Which


AI Search Barely Cites Syndicated News Or Press Releases via @sejournal, @MattGSouthern

The digital marketing landscape is currently undergoing its most significant transformation since the invention of the search engine itself. As Artificial Intelligence (AI) begins to dominate how users find information, the traditional metrics of success—keyword rankings and backlink volume—are being replaced by a new, more elusive metric: the AI citation.

For years, public relations professionals and SEO specialists have relied on syndication as a cornerstone of their strategy. The idea was simple: distribute a press release to hundreds of news outlets, gain a massive footprint of backlinks, and watch the authority of a brand grow. However, recent data suggests that in the age of AI search, this strategy is not just outdated; it is largely invisible.

A comprehensive analysis of over four million AI search citations reveals a stark reality for digital marketers. Syndicated press releases, once the gold standard for broad distribution, barely register in the answers provided by AI search engines like Perplexity, Google’s AI Overviews, and ChatGPT. Instead, these platforms are showing a heavy preference for original editorial content and well-maintained, brand-owned newsrooms. This shift signals a fundamental change in how information must be packaged and published to survive the transition from traditional search to generative AI discovery.

The Data Behind the Disconnect

The study, which examined four million citations across various AI-driven search platforms, provides a granular look at what LLMs (Large Language Models) deem “worthy” of being cited. The findings indicate that while a press release might be picked up by 500 local news sites, the AI model typically identifies the content as duplicate information. Because AI models are designed to provide the most concise and authoritative answer possible, they have no reason to cite 500 identical versions of a story. They seek the primary source or the most comprehensive editorial analysis of that source.
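The duplicate-collapsing behavior described here can be illustrated with a toy hash-based sketch. Real retrieval pipelines use far more sophisticated near-duplicate detection than exact hashing, and the URLs below are invented, but the sketch shows why 300 identical wire copies yield at most one citable source.

```python
import hashlib

def collapse_syndicated(documents):
    """Given (url, text) pairs, keep only the first URL seen for each
    distinct body of text — a toy version of the duplicate filtering
    that lets an answer engine cite one source instead of hundreds of
    identical wire copies. Returns {text_hash: canonical_url}."""
    canonical = {}
    for url, text in documents:
        # Normalize whitespace so trivial formatting differences
        # don't defeat the exact-match comparison.
        normalized = " ".join(text.split())
        digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        canonical.setdefault(digest, url)  # first occurrence wins
    return canonical
```

Note that "first occurrence wins" is itself a simplification: as the article argues, a real engine would prefer the most authoritative origin, not merely the first one crawled.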
In the hierarchy of AI citations, syndicated content sits at the very bottom. The data shows that the “long tail” of syndication—those dozens or hundreds of small, automated news sites that republish wire service content—contributes almost zero visibility in AI-generated answers. This is a massive wake-up call for companies that have historically measured the success of a PR campaign by the number of “placements” achieved through wire services.

Why AI Search Prefers Editorial Over Syndication

To understand why AI search engines are snubbing syndicated news, we have to look at how these models are trained and how they retrieve information. AI search isn’t just looking for keywords; it is looking for “information gain.” Information gain is a concept where a piece of content provides new, unique, or more detailed information that wasn’t available in other sources.

The Problem of Duplicate Content

Syndicated press releases are, by definition, duplicate content. When a wire service blasts a release to 300 different domains, the text remains identical across all of them. For a traditional search engine like Google, canonical tags and sophisticated algorithms have long been used to filter out this noise. For an AI search engine, the goal is even more focused: find the single most authoritative version of a fact. If an AI model sees the same text on 300 sites, it will likely ignore 299 of them. If the original source is a generic PR wire, the AI may skip it entirely in favor of an editorial piece that adds context, expert quotes, and analysis.

The Value of Context and Analysis

Editorial content—written by journalists, industry experts, or specialized bloggers—fares much better in AI citations because it provides context. A press release might announce a new product, but an editorial piece explains how that product fits into the current market, compares it to competitors, and discusses its potential impact. AI models thrive on this connective tissue.
They are designed to answer “why” and “how,” not just “what.” Because editorial content is unique and provides a narrative, it offers the “information gain” that LLMs prioritize when building a response for a user.

The Rise of the Owned Newsroom

One of the most interesting takeaways from the four-million-citation study is the resilience of “owned newsrooms.” While syndicated versions of news fail, the original source published on a company’s own domain often manages to secure a citation. This highlights the growing importance of brand authority and the “source of truth.”

When a company publishes an official statement, a white paper, or a detailed case study on its own “News” or “Insights” section, AI search engines recognize that domain as the primary source. This is particularly true if the brand has established E-E-A-T (Experience, Expertise, Authoritativeness, and Trust). In the eyes of an AI, citing the company that actually created the news is more logical than citing a third-party aggregator that simply republished it.

Building a Newsroom for the AI Era

For brands to capture AI search traffic, they must pivot from being “distributors” to being “publishers.” An AI-friendly newsroom is not just a list of PDFs or dry corporate announcements. It should include:

Unique data: AI models love statistics and original research. Publishing proprietary data is one of the fastest ways to earn a citation.

Expert perspectives: Content that includes quotes and insights from identifiable experts helps satisfy the “Expertise” component of E-E-A-T, which AI models use to weight sources.

Structured data: Using Schema markup helps AI crawlers understand the context of the news, the entities involved, and the date of publication.

Comprehensive coverage: Rather than a short 400-word blast, high-performing newsrooms publish deep dives that cover a topic from multiple angles.
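To make the structured-data point concrete, here is a minimal sketch that emits a schema.org NewsArticle JSON-LD payload for a newsroom post. All field values are placeholders to be populated from a CMS, and a production version would typically include additional properties (such as image and dateModified) beyond this minimal set.

```python
import json

def news_article_jsonld(headline, date_published, author_name, org_name, url):
    """Build a minimal schema.org NewsArticle JSON-LD string for an
    owned-newsroom post. Embed the result in the page inside a
    <script type="application/ld+json"> tag."""
    return json.dumps(
        {
            "@context": "https://schema.org",
            "@type": "NewsArticle",
            "headline": headline,
            "datePublished": date_published,  # ISO 8601 date
            "author": {"@type": "Person", "name": author_name},
            "publisher": {"@type": "Organization", "name": org_name},
            "url": url,
        },
        indent=2,
    )
```

Marking up the author as a named Person and the publisher as the owning Organization is what lets crawlers tie the article to the entities behind it, which supports the E-E-A-T signals discussed above.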
The Impact on Digital PR and SEO Strategy

The revelation that syndicated news is ignored by AI search necessitates a total overhaul of digital PR strategies. For years, the industry has been incentivized to focus on volume. Agencies would report to clients that a story was “covered” by hundreds of outlets, even if those outlets were just automated subdomains of local news stations. In an AI-first world, this metric is a vanity metric with zero ROI.

From Links to Citations

In traditional SEO, a link from a syndicated site might


Walmart: ChatGPT checkout converted 3x worse than website

The Reality Check for Agentic Commerce

For the past year, the tech world has been buzzing with the promise of “agentic commerce”—a future where artificial intelligence doesn’t just suggest products but actually handles the entire transaction for you. The vision was simple: you tell ChatGPT you need ingredients for a dinner party or a new set of power tools, and the AI handles the search, the selection, and the checkout without you ever leaving the chat interface.

However, recent data from Walmart, the world’s largest retailer, suggests that we are much further from that reality than many anticipated. In a revealing disclosure, Walmart confirmed that conversion rates for purchases made directly inside ChatGPT were three times lower than when users were directed to Walmart’s own website. This massive gap in performance highlights a critical friction point in the evolution of AI-driven shopping. While AI is excellent at discovery and curation, it is currently struggling to close the deal. For marketers, SEO professionals, and e-commerce platform owners, Walmart’s experience serves as a vital case study in why the “owned environment” still reigns supreme in the digital economy.

Inside the Experiment: Walmart and OpenAI’s Instant Checkout

The experiment began in earnest in November, when Walmart partnered with OpenAI to pilot a feature known as “Instant Checkout.” The initiative offered roughly 200,000 products that could be purchased natively within the ChatGPT interface. The goal was to remove the friction of jumping between apps and websites, creating a seamless “conversational” shopping experience. On paper, it seemed like a win-win. OpenAI could demonstrate the utility of its ecosystem for commerce, and Walmart could reach tech-forward consumers exactly where they were spending their time. However, the results were far from the revolutionary breakthrough both companies hoped for.

Daniel Danker, Walmart’s Executive Vice President of Product and Design, did not mince words when describing the outcome. He noted that the in-chat purchases converted at only one-third the rate of traditional click-out transactions. More tellingly, Danker described the native AI checkout experience as “unsatisfying.”

Why Instant Checkout Failed to Convert

To understand why a 3x difference in conversion exists, we have to look at the psychology of the modern shopper and the technical limitations of current LLM (Large Language Model) interfaces.

1. The Lack of Visual Richness

E-commerce is a visual medium. When a user visits Walmart.com, they are greeted with high-resolution images, video demonstrations, 360-degree product views, and detailed size charts. ChatGPT, by its nature, is a text-heavy interface. While it can display images, the rich, interactive experience of a dedicated retail site is difficult to replicate in a scrolling chat window. Shoppers often need that final visual confirmation before hitting “buy,” a step that feels less certain inside a third-party AI tool.

2. The Trust and Security Gap

Entering credit card information and personal shipping details into a chatbot feels fundamentally different from doing so on a brand’s official website. Despite the security protocols in place, there is a lingering “trust gap” when it comes to agentic commerce. Consumers are comfortable asking ChatGPT for a recipe or a summary of a news article, but trusting it to handle a financial transaction with a third-party retailer introduces a new layer of hesitation.

3. Missing Social Proof and Nuance

Walmart’s website is optimized for conversion through social proof—reviews, ratings, and “customers also bought” suggestions. While an AI can summarize reviews, the raw data of seeing thousands of verified purchases and reading specific user feedback provides a level of reassurance that a summarized AI response lacks. If the AI says, “This is a highly-rated drill,” it carries less weight than seeing 5,000 four-star reviews on the product page itself.

The Death of OpenAI’s Instant Checkout

The disappointing results from the Walmart partnership have had immediate consequences for OpenAI’s product roadmap. Earlier this month, OpenAI confirmed that it is phasing out the “Instant Checkout” feature entirely. This pivot marks a significant shift in how AI labs view commerce. Rather than trying to be the “everything store” that manages transactions internally, OpenAI is moving toward a model where the AI acts as a sophisticated lead generator, handing the final transaction back to the merchant. This is a victory for the traditional web and for brand-owned platforms. It suggests that for the foreseeable future, the “buy” button belongs on the retailer’s site, not in the LLM’s sidebar.

Enter Sparky: Walmart’s New Strategy for AI Integration

Walmart isn’t abandoning AI; it is simply changing how it integrates with it. The company is moving away from native ChatGPT checkouts and toward an “embedded” model. This involves the deployment of “Sparky,” Walmart’s own proprietary AI shopping assistant. Instead of a generic OpenAI checkout process, Sparky will be embedded within the ChatGPT ecosystem. This new approach changes the dynamic in several key ways:

Syncing the Shopping Experience

One of the biggest frustrations with the previous model was the lack of continuity. In the new version, users will log into their Walmart accounts through the interface. This allows for cart syncing across platforms. If you add an item to your cart via a conversation in ChatGPT, it will appear in your Walmart app and on the Walmart website. This creates a “persistent cart” that bridges the gap between AI discovery and traditional checkout.

Merchant-Handled Transactions

By moving the checkout back into Walmart’s system—even if it is triggered from within ChatGPT—Walmart regains control over the user experience. They can ensure that shipping options, loyalty points (like Walmart+), and promotional offers are applied correctly. This “app-based checkout” model is what OpenAI is now favoring for all its merchant partners.

Multi-Platform Presence

Walmart isn’t putting all its eggs in the OpenAI basket. The company confirmed that a similar integration with Google Gemini is slated for next month. By treating AI platforms as distribution channels rather than transaction hubs, Walmart is positioning itself to be present wherever the consumer starts their search journey.

What This Means for SEO and Digital Marketing

The Walmart/OpenAI data is a wake-up call for the


Perplexity’s Comet for iOS uses Google Search by default

The Evolution of Perplexity: From Answer Engine to Full-Scale Browser

In the rapidly shifting landscape of artificial intelligence, Perplexity has carved out a unique niche as the “answer engine” of choice for power users. However, the company is no longer content with being a simple destination for queries. With the launch of Comet for iOS, Perplexity is moving directly into the territory occupied by Safari and Google Chrome. Comet is not just an application with a search bar; it is a fully realized mobile browser designed to integrate large language models (LLMs) into the fabric of the daily browsing experience.

The most striking aspect of this release is the strategic partnership—or rather, the technical reliance—on its primary competitor. Perplexity has confirmed that Comet for iOS uses Google Search as its default engine. For many in the tech industry, this seems like a tactical retreat, but a closer look at the mechanics of mobile search reveals a calculated move toward pragmatism over idealism. By leveraging Google’s established infrastructure for traditional queries while overlaying its own sophisticated AI assistant, Perplexity is attempting to create a “hybrid” browsing model that offers the best of both worlds.

Why Comet Defaults to Google Search

The decision to set Google as the default search provider within Comet was not made lightly. Aravind Srinivas, the CEO of Perplexity, has been transparent about the reasoning behind this choice. He notes that mobile queries are fundamentally different from desktop queries. When users are on their phones, they are often looking for immediate, actionable, and location-dependent information. These are categories where traditional search engines still hold a massive advantage over generative AI. Specifically, Google excels in three key areas that current LLMs struggle to replicate with high precision: navigation, local search, and transactional intent.
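As an illustration of how intent-based routing between a traditional search backend and an LLM could work in principle, here is a minimal Python sketch. The keyword lists and the route_query function are invented for this example and do not reflect Perplexity’s actual implementation.

```python
# A minimal sketch of hybrid query routing; NOT Perplexity's actual logic.
# Queries with navigational, local, or transactional intent go to a
# traditional search backend; open-ended questions go to the LLM.

NAVIGATIONAL = {"login", "website", "homepage", "track"}
LOCAL = {"near me", "nearby", "open now"}
TRANSACTIONAL = {"buy", "price", "order", "coupon"}

def route_query(query: str) -> str:
    q = query.lower()
    if any(term in q for term in LOCAL):
        return "traditional_search"  # hyper-local, needs a fresh index
    if any(term in q for term in NAVIGATIONAL | TRANSACTIONAL):
        return "traditional_search"  # fast, high-intent lookup
    return "llm_synthesis"           # research or synthesis query

print(route_query("best coffee shop near me"))      # traditional_search
print(route_query("explain transformer attention"))  # llm_synthesis
```

A production router would use a trained intent classifier rather than keyword lists, but the shape of the decision is the same: reserve the slow, expensive generative path for queries that actually need synthesis.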
If a user searches for “best coffee shop near me” or “track my UPS package,” Google’s massive database and real-time indexing provide an instant, accurate result. Perplexity’s AI, while excellent at synthesizing complex information, can sometimes struggle with the latency and hyper-local accuracy required for these “right now” moments. By using Google as the backbone for these types of queries, Comet ensures that users do not experience a drop in quality when switching from Safari or Chrome. It allows the browser to remain fast and reliable for everyday tasks while saving the “heavy lifting” of AI processing for queries that actually require intelligence and synthesis.

The Hybrid Search Experience: How Comet Works

Comet is designed to bridge the gap between the “old” web and the “new” AI-driven web. The interface provides traditional search engine results pages (SERPs) for fast, high-intent queries. If you search for a stock price or a weather forecast, Comet serves those results via Google’s engine. However, the Perplexity Assistant is always present, ready to layer advanced intelligence over the standard web experience. This hybrid approach addresses one of the biggest friction points in AI search: speed. Generative AI models take time to process and output text. For a user who just wants to find a website’s login page, waiting five seconds for an AI to write a paragraph is an annoyance. Comet solves this by defaulting to the “fast” path for simple lookups and offering the “deep” path for research and complex questions.

The Role of the Perplexity Assistant

Within the Comet environment, the Perplexity Assistant acts as a digital companion that lives inside the browser. It isn’t just a chatbot tucked away in a menu; it is integrated into the browsing flow. Users can summon the assistant to interact with the page they are currently viewing. For example, if you are reading a long-form investigative article, you can ask the assistant to summarize the key points or explain a specific concept mentioned in the third paragraph. The assistant can also take actions on your behalf. Perplexity has touted the browser’s ability to help with form fills, draft emails based on page content, and even assist with bookings. This moves the browser from a passive viewing tool to an active productivity agent, aligning with the broader industry trend of “AI agents” that can execute tasks rather than just provide information.

Key Features of Comet for iOS

Comet arrives with a suite of features that differentiate it from standard mobile browsers. These features are built on the premise that a browser should be more than a window to the web; it should be an intelligence tool.

Voice-Enabled Browsing

On mobile, typing can be a hurdle. Comet emphasizes voice interaction, allowing users to ask complex questions while they browse. This isn’t just basic voice-to-text; the system is designed to understand context. You can ask follow-up questions about a site you are currently visiting without having to re-specify the subject, making the experience feel more like a conversation and less like a series of disjointed searches.

Deep Research and Cited Summaries

One of Perplexity’s flagship features is “Deep Research,” which has been ported over to the Comet browser. When a user initiates a research task, the AI doesn’t just look at one source. It crawls multiple tabs, analyzes various perspectives, and generates a comprehensive summary with citations. This is particularly useful for students, professionals, and researchers who need to get up to speed on a topic quickly without manually clicking through twenty different Google results.

Cross-Tab Synthesis

One of the most innovative features of Comet is its ability to research across tabs. Traditional browsers treat tabs as silos—information in tab A has no relationship to information in tab B. Comet’s assistant can look across your open tabs to find connections, summarize common themes, or help you compare products across different retail sites. This is a significant leap forward in mobile productivity.

SEO Implications: A New Era for Digital Marketers

The launch of Comet and its reliance on Google Search creates a complex new environment for SEO professionals and digital marketers. For years, the industry has speculated that AI search would kill traditional SEO. However, Perplexity’s decision to use Google as a default suggests that


Microsoft Advertising simplifies automated bidding setup

The Evolution of Bidding in Microsoft Advertising

The digital advertising landscape is undergoing a significant transformation, driven largely by the rapid advancement of machine learning and artificial intelligence. Microsoft Advertising, a key player in the search and native advertising space, is staying at the forefront of this evolution by refining its platform to be more intuitive and efficient. Recently, Microsoft announced a strategic shift in how advertisers configure automated bidding, moving away from fragmented settings toward a more consolidated, goal-oriented framework.

This update is not merely a cosmetic change to the user interface. It reflects a fundamental philosophy in modern digital marketing: reducing manual complexity so that advertisers can focus on high-level strategy while the platform’s algorithms handle the granular execution. By folding familiar targets like Target CPA (Cost Per Acquisition) and Target ROAS (Return on Ad Spend) into broader automated strategies, Microsoft is streamlining the campaign creation process without sacrificing the power of its optimization engines.

Simplifying the Automated Bidding Experience

Historically, advertisers on Microsoft Advertising—and indeed many other platforms—faced an array of bidding options that could often feel redundant or confusing. You might have had to choose between “Maximize Conversions” and “Target CPA” as if they were entirely different animals. In reality, these strategies share a common goal: driving as many conversions as possible within specific parameters. Under the new simplified setup, Microsoft is consolidating these options into two core pillars based on the advertiser’s primary objective:

1. Maximize Conversions

For advertisers whose primary goal is volume—generating the highest number of leads, sign-ups, or sales within a given budget—the “Maximize Conversions” strategy is the foundation. However, Microsoft recognizes that volume often needs a safety net. Therefore, Target CPA (tCPA) is now an optional layer within the Maximize Conversions framework. Instead of selecting tCPA as a standalone strategy, you simply choose Maximize Conversions and, if desired, input your target cost per acquisition.

2. Maximize Conversion Value

For e-commerce businesses or service providers where not all conversions are equal, “Maximize Conversion Value” is the go-to approach. This strategy focuses on the total revenue or “value” generated by the campaign rather than just the raw count of conversions. Just as with the conversion-focused model, Target ROAS (tROAS) has been integrated as an optional setting. Advertisers can now select Maximize Conversion Value and define a specific return on ad spend goal within that selection.

The Technical Logic: What Has (and Hasn’t) Changed?

A common concern among seasoned PPC (Pay-Per-Click) managers when platforms “simplify” things is whether they are losing control or if the underlying algorithm is being altered. Microsoft has been clear on this front: the underlying bidding behavior remains exactly the same. The mathematical models, the data signals used (such as device, location, time of day, and intent), and the way the system bids in real-time auctions have not changed. The update is strictly focused on the configuration experience.

By grouping these settings, Microsoft is ensuring that advertisers are thinking about their goals in a more structured way. If your goal is conversions, you start with the conversion strategy. If you have a specific price point you need to hit to remain profitable, you add the tCPA target. This hierarchy makes logical sense and aligns with how modern AI-driven bidding works best—by giving the machine a clear objective and a boundary to work within.
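The objective-plus-optional-guardrail hierarchy can be sketched as a small data model. This is an illustration of the concept only; the BidStrategy class and its field names are invented for this example and are not part of the Microsoft Advertising API.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of the consolidated bidding setup: one primary
# objective, with the target (tCPA or tROAS) as an optional layer.
# Invented for this sketch; not the Microsoft Advertising API.

@dataclass
class BidStrategy:
    objective: str                       # "maximize_conversions" or "maximize_conversion_value"
    target_cpa: Optional[float] = None   # optional cost guardrail for volume goals
    target_roas: Optional[float] = None  # optional return guardrail (e.g. 4.0 = 400%)

    def validate(self) -> None:
        # Each guardrail only makes sense layered on its matching objective.
        if self.objective == "maximize_conversions" and self.target_roas is not None:
            raise ValueError("tROAS applies only to Maximize Conversion Value")
        if self.objective == "maximize_conversion_value" and self.target_cpa is not None:
            raise ValueError("tCPA applies only to Maximize Conversions")

# Volume goal with a cost ceiling: Maximize Conversions + optional tCPA.
lead_gen = BidStrategy(objective="maximize_conversions", target_cpa=25.0)
lead_gen.validate()

# Revenue goal with a return floor: Maximize Conversion Value + optional tROAS.
ecommerce = BidStrategy(objective="maximize_conversion_value", target_roas=4.0)
ecommerce.validate()
```

Leaving both targets unset still yields a valid strategy, which mirrors the point above: the objective is the strategy, and the target is just a constraint layered on top of it.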
Why Microsoft Is Making This Move Now

The move toward simplification is part of a broader industry trend toward “standardization.” Google Ads made similar changes to its bidding structure several years ago, and by aligning its interface with industry standards, Microsoft makes it significantly easier for multi-platform advertisers to manage their campaigns. Here are several reasons why this shift is beneficial for the ecosystem:

Reducing the Barrier to Entry

For small business owners or new digital marketers, the sheer number of bidding options in a modern ad platform can be overwhelming. “Should I use Target CPA or Maximize Conversions?” is a common question that often leads to analysis paralysis. By presenting two clear paths—Conversions or Value—Microsoft lowers the barrier to entry, allowing users to get campaigns up and running faster and with more confidence.

Consistency Across Accounts

For agencies managing dozens or hundreds of accounts, consistency is key to efficiency. This update ensures that the setup process is uniform across all campaigns. It reduces the likelihood of human error where one campaign might be set to a legacy standalone tCPA setting while another is using a newer automated strategy, leading to fragmented reporting and optimization workflows.

Focus on Machine Learning Efficiency

Automated bidding thrives on data. By consolidating these strategies, Microsoft can potentially gather and process performance data more effectively across its network. When the system knows that a “Maximize Conversions” campaign with a target is fundamentally trying to achieve the same thing as one without a target (just with more constraints), it can apply its learnings more broadly, leading to faster “learning phases” for new campaigns.

Practical Implications for Advertisers

If you are currently managing Microsoft Advertising campaigns, you might be wondering how this affects your daily routine. The good news is that the transition is designed to be seamless.

No Disruption to Existing Campaigns

Microsoft has confirmed that any existing campaigns currently using Target CPA or Target ROAS as standalone settings will continue to run without interruption. You do not need to go in and manually update your current campaigns. They will maintain their performance goals and bidding logic. However, when you go to create a new campaign, you will see the new streamlined interface.

Portfolio Bid Strategies Remain Intact

For advanced advertisers who use Portfolio Bid Strategies to manage multiple campaigns under a single bidding goal, there is no change. These remain a powerful way to aggregate data across different campaign structures to fuel the bidding algorithm, and Microsoft is keeping this functionality as it is.

Optionality Provides Continued Control

It is important to emphasize that while the setup is simpler, the control is still there. Setting an optional Target CPA or Target


Google expands its Universal Commerce Protocol to power AI-driven shopping

The Evolution of E-Commerce: From Search Queries to Autonomous Agents

The landscape of digital commerce is undergoing a fundamental transformation. For decades, the process of online shopping has remained largely unchanged: a user types a query into a search engine, clicks through various links, compares prices manually, adds items to a cart, and navigates a checkout flow. However, Google is currently building the infrastructure to move beyond this manual process. By expanding its Universal Commerce Protocol (UCP), Google is laying the groundwork for what industry experts call “agentic commerce.”

Agentic commerce refers to a future where AI agents—powered by large language models like Google Gemini—don’t just find products but actually perform the labor of shopping. These agents can evaluate reviews, compare technical specifications, apply discounts, and execute purchases on behalf of the user. To make this a reality, a bridge is needed between the AI’s reasoning capabilities and the retailer’s technical backend. That bridge is the Universal Commerce Protocol.

Google’s latest updates to UCP represent a significant leap forward in making AI-driven shopping functional, scalable, and personalized. By introducing new cart capabilities, real-time catalog access, and identity linking, Google is ensuring that the transition from human-led browsing to agent-led buying is seamless for both the consumer and the merchant.

What is the Universal Commerce Protocol (UCP)?

The Universal Commerce Protocol is an open standard designed to streamline how retailers share data with AI platforms. In the past, every merchant had their own unique way of handling carts, inventory, and user accounts. For an AI agent to interact with thousands of different websites, it would traditionally need to “scrape” those sites, a process that is often slow, error-prone, and fragile. UCP solves this by providing a modular, standardized language. When a retailer adopts UCP, they are essentially providing a roadmap that an AI agent can read. This allows the agent to understand exactly how to add an item to a basket, how to check if a specific size is in stock, and how to apply a user’s loyalty rewards without a human ever having to click a button. This shift from “reading” a website to “interfacing” with a protocol is what will define the next decade of SEO and digital retail.

New Features: Empowering the Next Generation of AI Agents

Google’s recent expansion of the protocol introduces three critical features that address the most common friction points in automated shopping. These updates move the needle from simple product discovery to complex, multi-step transactions.

Advanced Cart Capability

One of the primary limitations of early AI shopping experiments was the “one-and-done” nature of the interaction. An agent might be able to find a single pair of shoes and send the user to a checkout page, but it struggled with the complexity of building a full shopping basket. The new cart capability allows agents to add or save multiple products from a single retailer in one go. This mirrors the way humans actually shop. A consumer rarely visits a grocery or electronics site for a single item; they build a list. With this update, a user could tell Gemini, “I’m planning a camping trip; find me a four-person tent, a portable stove, and two sleeping bags from a reputable outdoor brand.” The AI agent can now assemble that entire “basket” within the UCP framework, allowing the user to review the final total and check out in a single step.

Real-Time Catalog Integration

In e-commerce, data freshness is everything. There is nothing more frustrating for a consumer than being told an item is in stock by an AI, only to find it sold out upon reaching the checkout page. The UCP catalog feature gives agents direct access to real-time product data, including pricing, inventory levels, and specific product variants like color or size. This real-time link ensures that the AI agent is acting on the most current information available. It also allows the agent to handle more nuanced queries. Instead of just finding “a blue shirt,” the agent can confirm that the “Navy Blue Performance Polo” is available in “Large” at the “Downtown Seattle” location for a specific price. This level of accuracy is vital for building consumer trust in AI-led commerce.

Identity Linking and Loyalty Preservation

For retailers, the most valuable customers are those in their loyalty programs. Historically, shopping through third-party aggregators or search engines meant that these “logged-in” benefits were often lost. A customer might have a 10% member discount or qualify for free shipping, but if an AI agent is handling the search, those perks might not be applied. The new identity linking feature in UCP solves this problem. It allows shoppers to carry over their authenticated status to platforms connected through the protocol. This means that when an agent shops on behalf of a user, it does so using the user’s established profile. Member-only pricing, accumulated rewards points, and saved shipping preferences remain intact. This feature is a win-win: retailers maintain their direct relationship with the customer, and customers get the best possible deal without having to manually log in to every site they visit.

The Strategic Importance for SEO and Digital Marketing

For digital marketers and SEO professionals, the expansion of UCP signals a shift in priorities. While traditional organic ranking factors like backlinks and keyword density still matter, “data quality” is becoming the new gold standard. If an AI agent cannot verify your inventory or understand your pricing through a protocol like UCP, you effectively do not exist in the “agentic” search results.
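To visualize what an agent-assembled, multi-item cart for a single retailer might look like, here is a hypothetical sketch in Python. The field names, SKUs, and identity token are placeholders invented for illustration and do not reflect the actual UCP message schema or endpoints, which Google defines in the protocol itself.

```python
# A HYPOTHETICAL cart request an agent might assemble for one retailer
# after a "plan my camping trip" prompt. Illustration only; these field
# names and values are NOT the real UCP schema.
cart_request = {
    "merchant": "example-outdoor-retailer",
    "customer": {"identity_token": "linked-account-token-123"},  # placeholder
    "items": [
        {"sku": "TENT-4P-GRN", "quantity": 1},
        {"sku": "STOVE-PORTABLE", "quantity": 1},
        {"sku": "SLEEPING-BAG-STD", "quantity": 2},
    ],
}

# The agent can summarize the basket for the user before checkout.
total_units = sum(item["quantity"] for item in cart_request["items"])
print(f"{total_units} units across {len(cart_request['items'])} line items")
```

The key idea the sketch captures is that the whole basket travels as one structured message tied to the user's linked identity, so the retailer can price it with member benefits applied and the user reviews a single final total.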
Visibility in the Age of Gemini

Google has made it clear that these UCP capabilities will be integrated directly into its own ecosystem, specifically within Google Search and the Gemini app. As more users turn to Gemini for “help me buy” tasks, the products that show up will be those backed by robust, protocol-compliant data. This means that a retailer’s Merchant Center feed is no longer just a tool for Google Shopping ads; it is the fundamental data source for the AI agents that


What patents reveal about the foundations of AI search

Every time a new large language model (LLM) is released or Google rolls out a significant update to its AI Overviews, the SEO industry tends to react with a mix of panic and excitement. We often witness a form of collective amnesia, where professionals scramble to optimize for “new” features that were actually outlined in patent offices over a decade ago. We become so fixated on the immediate future that we forget to look at the historical blueprints that describe exactly how these systems are built to function.

To succeed in the landscape of 2026 and beyond, the most effective strategy isn’t just to be a futurist; it is to be an archaeologist. Understanding the foundations of AI search requires digging into the technical filings that preceded the current era of generative AI. By looking back at foundational patents, we can understand the long-standing rules of the game, and by looking ahead, we can see how modern computing power is finally allowing search engines to enforce those rules at scale.

The archaeology of SEO: Why history repeats in search

There is a persistent misconception that mastering AI search requires becoming a master prompt engineer or staying awake 24/7 to read every research paper from OpenAI or Anthropic. While staying current is helpful, the underlying logic governing today’s search “magic” is often based on mathematical frameworks established years ago. To truly understand search, we must look at the documents that defined the intent of the engineers long before the hardware could keep up with their vision.

We cannot discuss patent research without honoring the legacy of the late Bill Slawski. For two decades, Slawski served as the SEO industry’s premier archaeologist. While the rest of the community was debating keyword density and backlink quantities, Slawski was dissecting dry, technical filings to predict the exact state of search we find ourselves in today. His work at SEO by the Sea proved that search engines provide a roadmap of their intentions years before those intentions become reality.

Agent Rank (2007): The precursor to E-E-A-T

Slawski analyzed the concept of “Agent Rank” nearly 20 years ago. This patent described a system of digital signatures that would connect content to specific authors, assigning them reputation scores based on the quality and reception of their work. At the time, the SEO community largely ignored it because the technology to implement it globally didn’t seem to exist. Fast forward to today, and we refer to this concept as E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness). Google didn’t just invent these guidelines recently; they finally acquired the processing power and the machine learning sophistication to run the numbers on author reputation. The “Agent” is the “E” and the “A,” and the patent was the blueprint.

The Fact Repository (2006): The birth of answer engines

Long before the Google Knowledge Graph became a household name in marketing, Slawski identified patents for a “Browseable Fact Repository.” This 2006 filing described a system for extracting facts from the web and storing them in a structured way that a machine could easily navigate. This logic is the primary engine behind modern “answer engines.” When an AI provides a direct answer, it isn’t “thinking” in the human sense; it is querying a repository of facts anchored by the principles laid out in the mid-2000s. The algorithm isn’t magic; it is mathematics applied to historical blueprints. If you want to understand why a new feature appears today, look at the filings from 2007 to 2016. That is where the engineering rules were established.

Strategy vs. Mechanics: Moving from strings to verified things

In the modern SEO landscape, it is easy to get buried under a mountain of buzzwords. To stay focused, it is helpful to categorize your efforts into two buckets: strategy and mechanics. The most significant shift we have seen in recent years is the move from “strings” to “things,” but in 2026, the baseline has shifted again. We have moved from simple entities (things) to verified entities (verified things). An entity—whether it is a person, a brand, or a concept—is essentially worthless in the eyes of an AI if the system cannot prove it is real. We can use a construction metaphor to understand this hierarchy:

Semantic SEO is the architecture

This is the vision for your digital presence. Semantic SEO is about ensuring the meaning of your content aligns with the user’s intent. It involves mapping out topics and ensuring that the context of your site provides a comprehensive answer to a user’s underlying questions.

Entity SEO is the bricklaying

Entities are the building blocks. By using distinct nouns and structured data, you build a site that a machine can parse. You are moving away from ambiguous keywords and toward specific, identifiable concepts that exist in the search engine’s knowledge base.

Verification is the mortar

This is the step most SEOs currently overlook. Verification is about turning entities into findable, provable facts that are connected to a verified human or organization. If your content isn’t connected to a provable expert, it is viewed as “noise.” In an era where AI can generate infinite content, the only way for a search engine to maintain quality is to prioritize content that is anchored to a verifiable source.

AEO vs. GEO: Understanding the nuance of AI search

The industry often uses the terms Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO) interchangeably, but they are fundamentally different. They require different content structures, serve different user needs, and are rooted in different technological approaches.
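One common way to implement the verification step described above, drawn from standard entity-SEO practice rather than anything specified in this article, is Schema.org Person markup with sameAs links pointing to established profiles that corroborate the author's identity. All names and URLs below are hypothetical placeholders.

```python
import json

# Illustrative author markup that anchors content to a verifiable person.
# Names and profile URLs are hypothetical placeholders.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Principal SEO Analyst",
    "worksFor": {"@type": "Organization", "name": "Example Agency"},
    # sameAs links let a machine cross-check this entity against
    # independent profiles, turning a name into a provable fact.
    "sameAs": [
        "https://www.linkedin.com/in/janedoe-example",
        "https://scholar.google.com/citations?user=example",
    ],
}

doc = json.dumps(author_schema, indent=2)
print(doc)
```

The sameAs array is doing the verification work here: each link gives the search engine an independent source to reconcile against its knowledge base, which is what elevates a simple entity to a verified one.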
Answer Engine Optimization (AEO)

AEO is designed for the “direct answer.” This is the realm of voice assistants like Siri and Alexa, or the single, definitive snippet at the top of a search result. It is a binary system. The search engine is looking for a specific fact to fulfill a specific query. To succeed in AEO, you need “confidence anchors.” These are unnuanced, structured facts. Because the engine is “fetching” rather than “synthesizing,” it needs high-confidence data. If your
