Cloudflare CEO: Bots could overtake human web usage by 2027

The Great Inversion: Why Bot Traffic is Set to Dominate the Web For decades, the internet has been a human-centric domain. We browse, we click, we consume, and we purchase. However, we are approaching a historic tipping point. According to Matthew Prince, the CEO of Cloudflare, the balance of power on the digital frontier is shifting rapidly. Speaking at the SXSW (South by Southwest) conference, Prince delivered a startling prediction: by 2027, AI bots and automated agents could officially outnumber human users on the web. This is not a projection based on the “junk” bot traffic of the past—the scrapers and spam bots that have always haunted the corners of the internet. Instead, this shift is being driven by the explosion of generative AI and sophisticated AI agents. These autonomous systems are designed to browse the web on behalf of humans, performing tasks, gathering data, and making decisions at a scale and speed that no biological user could ever match. From 20% to the Majority: The Escalation of Automated Traffic Historically, the internet has maintained a relatively stable ecosystem regarding traffic sources. For years, Cloudflare and other infrastructure providers noted that approximately 20% of web traffic was generated by bots. These ranged from search engine crawlers like Googlebot to malicious actors attempting credential stuffing or DDoS attacks. That baseline is now being demolished. Unlike the traffic spikes seen during the COVID-19 pandemic, which were temporary and driven by human behavioral shifts, the current rise in bot activity is a steady, structural climb. Prince notes that there is no sign of this trend slowing down. As AI becomes more integrated into our daily workflows, the “agent-driven” model of browsing is becoming the new standard. The Math of AI Browsing: 5 vs. 5,000 The primary reason for this massive surge lies in the fundamental difference between how a human researches a topic and how an AI agent performs the same task. When a human goes shopping for a new pair of running shoes, they might visit three to five websites, read a few reviews, and make a purchase. The “load” on the internet infrastructure is minimal. An AI agent, tasked with finding the “best possible running shoe for a marathon runner with high arches under $150,” does not stop at five sites. To provide a truly optimized answer, that agent may crawl, scrape, and analyze thousands of data points simultaneously. Prince pointed out that where a human visits five sites, an agent might hit 5,000. This represents a literal thousand-fold increase in web activity per “user” intent. The Death of the Traditional Click-Through Model For twenty years, the business model of the internet has been remarkably consistent: create high-quality content, drive human traffic to that content, and monetize that traffic through advertising or direct sales. This model relies entirely on the “click.” Prince warns that AI agents are systematically breaking this cycle. An AI bot does not click on a banner ad. It does not get distracted by a “recommended for you” sidebar. It does not have an emotional response to brand storytelling. Most importantly, the human using the AI agent often never sees the source material at all. As users transition from search engines to “answer engines,” they increasingly trust the synthesized output provided by the robot. The footnotes and source links are rarely clicked. This creates a crisis for publishers and marketers who rely on direct engagement to survive. 
If the “user” is a bot that filters out everything but the raw data, the traditional advertising-based economy faces an existential threat. Infrastructure and the Rise of AI Sandboxes The technical demands of this new era are also reshaping how the internet is built. Prince described a future where computing happens in “sandboxes”—temporary, isolated environments where AI agents can execute code and process information. In this vision, these sandboxes are not permanent fixtures. Instead, they are spun up and torn down in milliseconds. Prince estimates that these environments will be created millions of times per second to service the sheer volume of agent requests. This represents a massive shift in how server resources are allocated, moving away from static hosting toward a highly dynamic, hyper-scale compute model. For companies like Cloudflare, this means the pressure on global infrastructure is only going to intensify as these agents become the primary “residents” of the web. Disintermediation: The Erosion of Brand Loyalty One of the most profound impacts of the bot-dominated web is the “disintermediation” of the customer relationship. Historically, brands have spent billions of dollars building trust and emotional connections with their audience. This brand equity acts as a “shortcut” for human decision-making; we buy a specific brand because we know and trust it. AI agents, however, are immune to brand prestige. A bot optimizing for price, shipping speed, and material quality will choose the product that objectively meets those criteria, regardless of the logo on the box. Prince noted that AI agents “don’t care about brand.” They care about data and efficiency. For small businesses, this is a double-edged sword. On one hand, an AI agent might discover a small, high-quality boutique that a human searcher would have missed. On the other hand, the traditional “trust shortcuts” that small businesses have relied on—such as local reputation or personalized service—become harder to communicate to a robot that is only looking at structured data and price points. A New Revenue Path: Licensing vs. Advertising While the decline of ad revenue is a grim prospect for many publishers, Prince suggested that AI could offer a new, potentially more lucrative revenue stream: data licensing. Large Language Models (LLMs) and AI agents are hungry for unique, high-quality data. They have already scraped the “easy” parts of the web. What they need now is “unique local interesting information” that cannot be replicated by an algorithm. Prince cited local media as a primary example. A local newspaper covering city council meetings in a specific town provides data that is rare and highly valuable to an AI trying to


SEO’s new battleground: Winning the consensus layer

You could be ranking in Position 1 and still be completely invisible. This sounds like a paradox, perhaps even an impossibility in the world of search engine optimization, but it is the defining reality of the current digital landscape. For decades, the goal was simple: win the top spot, earn the click, and convert the user. Today, that linear path is fracturing. Consider this scenario: A potential customer opens an AI interface like ChatGPT, Claude, or Perplexity. They ask, “What is the most reliable enterprise CRM for a mid-sized manufacturing firm?” The AI processes the request, scans its internal knowledge base and real-time web data, and provides a list of three recommendations. Your competitor is mentioned as the top choice. You are not mentioned at all. Meanwhile, back on the traditional Google Search Results Page (SERP), your website is sitting comfortably at the very top of the organic results for that exact query. In this new paradigm, your Number 1 ranking did absolutely nothing to help you capture that lead. This shift represents the emergence of the consensus layer—a new battleground where visibility is determined not by a single high-ranking page, but by the aggregate of information distributed across the web. To survive in an era of Generative Engine Optimization (GEO), marketers must understand that the game has moved from ranking to consensus. The Evolution from Retrieval to Synthesis Traditional SEO was built on a retrieval-based system. Google’s crawlers would index pages, and when a user searched for a keyword, the algorithm would retrieve the most relevant links. The user was the ultimate synthesizer; they would look at the blue links, click on a few, read the content, and form their own conclusion. In this model, being the first link was the ultimate prize because it commanded the highest probability of a click. AI-driven search functions differently. Systems like Google’s AI Overviews (SGE), ChatGPT, and Perplexity are synthesis-based. They don’t just find pages; they construct answers. They pull data points from dozens of different sources, identify which claims appear consistently across credible platforms, and generate a single, cohesive response. This process is powered by Retrieval-Augmented Generation (RAG), a technical architecture that allows Large Language Models (LLMs) to ground their answers in factual, up-to-date information from the web. The impact of this shift is measurable and stark. Since mid-2024, organic click-through rates (CTRs) for queries that trigger an AI Overview have plummeted by approximately 61%. Even more concerning for traditionalists is that even on queries where an AI Overview does not appear, organic CTRs have fallen by 41%. Users are becoming conditioned to find answers within the search interface or via direct AI chat, bypassing the traditional website visit entirely. If you aren’t part of the AI’s synthesized answer, you effectively do not exist for a growing segment of your audience. Understanding the Consensus Layer The consensus layer refers to the degree to which multiple, independent, and credible AI systems produce consistent outputs regarding your brand, products, or expertise. It is essentially pattern recognition at a global scale. When an AI “reads” the internet to answer a query, it looks for corroboration. If five different reputable industry journals, a hundred Reddit users, and a dozen expert blogs all describe your software as the “best for security,” the AI assigns a high confidence score to that claim. 
It becomes part of the “consensus.” AI systems are engineered to avoid hallucinations—the tendency to confidently state false information. Their primary defense against this is cross-referencing. If only one source (even a high-authority site) makes a specific claim, the AI may view it as an outlier and exclude it from the final answer to minimize risk. Conversely, if a claim is repeated across various independent domains, it is treated as a fact. This creates a new rule for modern marketing: isolated authority is no longer enough; you need distributed credibility. You can see this in action by looking at how AI cites its sources. A Semrush study recently revealed a shocking trend: nearly 9 out of 10 webpages cited by ChatGPT appear outside the top 20 organic results for those same queries. This proves that the criteria AI uses to “recommend” a site are fundamentally different from the criteria Google uses to “rank” a site. The AI isn’t looking for the best optimized page; it’s looking for the most corroborated answer. The Essential Signals of Consensus To win the consensus layer, you must influence the signals that AI models prioritize during the RAG process. While traditional SEO signals like backlinks and domain authority still matter, they are now merely the foundation rather than the finish line. The Power of Unlinked Brand Mentions For years, SEOs obsessed over the “link.” If a mention didn’t have a backlink, it was often dismissed as having little to no value. In the age of AI, this is a dangerous oversight. LLMs process text, not just link graphs. They scan the web for brand references, sentiment, and associations. An unlinked mention in a high-tier publication like The New York Times or a specialized industry journal serves as a massive consensus signal. It tells the AI that your brand is a recognized entity in a specific context. As search evolves, unlinked mentions are rapidly growing in importance as markers of brand authority. Publisher Diversity and Independent Validation In the old SEO playbook, getting ten links from the same high-authority site was a great way to boost a specific page. In the consensus model, this has diminishing returns. AI systems value diversity of sources. If your brand is only talked about on your own site and one partner site, there is no consensus. However, if you are mentioned across a diverse range of independent publishers—news sites, niche blogs, academic papers, and trade magazines—you signal to the AI that your authority is broad and undisputed across the industry. Community Platforms as Truth Signals Platforms like Reddit, Quora, and specialized niche forums have become “consensus gold.” AI models, particularly those developed by Google
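To make the corroboration idea concrete, here is a minimal Python sketch of how a retrieval layer might score a claim by counting distinct supporting domains. It is illustrative only: the URLs, brand name, and threshold are invented, and no production engine is this simple.

```python
from urllib.parse import urlparse

# Hypothetical passages a RAG pipeline might retrieve for the query
# "most reliable enterprise CRM for a mid-sized manufacturing firm".
retrieved = [
    ("https://industry-journal.example/reviews", "AcmeCRM is the best option for security-conscious teams."),
    ("https://niche-blog.example/crm-roundup", "For security, AcmeCRM leads the mid-market pack."),
    ("https://www.reddit.com/r/sysadmin/", "We chose AcmeCRM mainly for its security record."),
    ("https://acmecrm.example/landing", "AcmeCRM: the most secure CRM on the market."),
]

def consensus_score(passages, claim_keywords):
    """Count how many distinct domains corroborate a claim.

    Ten mentions on one domain still count as a single vote; breadth across
    independent publishers is what raises the score.
    """
    supporting = {
        urlparse(url).netloc
        for url, text in passages
        if all(kw.lower() in text.lower() for kw in claim_keywords)
    }
    return len(supporting), sorted(supporting)

score, domains = consensus_score(retrieved, ["AcmeCRM", "security"])
print(f"{score} independent domains support the claim: {domains}")
# A synthesis layer might only repeat claims that clear some threshold, e.g. 3+.
```

The takeaway matches the article's rule: isolated authority is a single vote, while distributed credibility across independent publishers is what cross-referencing rewards.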


Adobe to shut down Marketo Engage SEO tool

Understanding the Deprecation of the Marketo Engage SEO Tool In a move that signals a significant shift in its product roadmap, Adobe has officially announced the upcoming shutdown of the native SEO tool within Marketo Engage. This decision, detailed in the February 2026 release notes, marks the end of an era for one of the platform’s legacy features. For digital marketers and demand generation professionals who have relied on Marketo for their end-to-end campaign management, this change necessitates a proactive approach to data preservation and a pivot toward more robust search engine optimization solutions. The SEO tool within Marketo Engage was designed to provide marketers with basic keyword tracking, inbound link analysis, and page-level optimization suggestions. However, as the digital marketing landscape has matured, the requirements for a competitive SEO strategy have evolved far beyond the capabilities of a secondary feature within a marketing automation platform (MAP). Adobe’s decision to sunset the tool reflects a broader industry trend of consolidating specialized tasks into dedicated, best-in-class software suites. Key Dates and Deadlines for Marketo Users For organizations currently utilizing the Marketo Engage SEO feature, there is a specific timeline that must be followed to ensure no critical historical data is lost. Adobe has set a hard deadline for the deprecation, giving users a window to transition their workflows. The SEO feature will be officially deprecated on March 31, 2026. Up until this date, users will continue to have access to the SEO tile within the Marketo interface. However, this is the final day to perform any administrative tasks or data exports related to the tool. On April 1, 2026, the SEO tile will be permanently removed from the platform, and all associated data that has not been exported will be inaccessible. Adobe recommends that administrators begin the export process as soon as possible. Because the tool tracked historical keyword rankings and site audits, this data can be invaluable for longitudinal reporting. Failing to secure these records before the March 31 cutoff could result in a significant gap in an organization’s marketing intelligence. Why Adobe Is Closing the SEO Chapter in Marketo The decision to remove a feature from a flagship product like Marketo Engage is never made in a vacuum. According to Adobe’s Keith Gluck, the primary driver behind this move is the desire to allow the Marketo Engage team to focus their development resources on high-impact areas of the platform. In the competitive world of SaaS, “feature creep”—the tendency to keep adding minor tools that eventually become difficult to maintain—can distract from core product innovation. Internal reports suggest that the SEO tool suffered from low adoption rates. Many Marketo users already utilized external, specialized platforms for their search strategy, leaving the native SEO tile largely unconfigured. By deprecating features that see minimal use, Adobe can streamline the user experience and dedicate more engineering power to lead scoring, attribution modeling, and AI-driven content personalization—areas where Marketo remains a market leader. The Impact of the Semrush Acquisition Perhaps the most significant reason for the shutdown is Adobe’s 2025 acquisition of Semrush. This strategic move fundamentally changed Adobe’s value proposition regarding search visibility. 
Semrush is widely regarded as one of the most comprehensive SEO and digital marketing suites available, offering deep insights into keyword research, backlink profiles, competitive intelligence, and technical site health. With Semrush now a part of the Adobe family, maintaining a basic, legacy SEO tool inside Marketo Engage no longer made strategic sense. It would have been redundant to invest in upgrading Marketo’s native SEO capabilities when the company now owns a platform that is purpose-built for that exact task. This acquisition provides Adobe customers with a path toward a much more powerful SEO experience, integrated within the broader Adobe Experience Cloud ecosystem. The Evolution of SEO in the Era of AI and LLMs The timing of this deprecation also coincides with a massive transformation in how search engines operate. The rise of Large Language Models (LLMs) and AI-powered search experiences (such as Google’s Search Generative Experience) has made traditional SEO more complex. Modern SEO is no longer just about tracking keyword positions; it involves understanding user intent, optimizing for conversational queries, and managing brand presence across various AI platforms. Legacy tools, like the one being removed from Marketo, were built for a “10 blue links” world. They struggle to provide meaningful insights into the nuances of modern, AI-driven search. By moving away from these older tools and leaning into the advanced analytics provided by platforms like Semrush, Adobe is positioning its users to better handle the volatility and complexity of the modern search landscape. How to Export Your Marketo SEO Data To prepare for the March 31, 2026 deadline, Marketo administrators should follow a structured data migration plan. The data within the SEO tool is typically divided into several categories, including keyword lists, page optimization scores, and competitor tracking. To preserve this information, users should navigate to the SEO area of Marketo Engage and look for the export options available in each view. It is advisable to export these files into a standardized format like CSV or Excel. Once the data is exported, it can be imported into a new SEO management platform or stored in a centralized marketing data warehouse for historical reference. Adobe has provided specific instructions through their Experience League community pages to assist users with the technical aspects of this export process. Transitioning to a Dedicated SEO Solution For organizations that were actively using Marketo for SEO, the sunsetting of the tool is an opportunity to upgrade their tech stack. While the native tool offered convenience, dedicated SEO platforms provide a level of depth that is necessary for modern B2B marketing. Here are the primary areas where a dedicated tool will offer an immediate upgrade: Advanced Keyword Research Unlike the basic tracking in Marketo, dedicated tools allow for deep keyword discovery, including “People Also Ask” data, search volume trends, and keyword difficulty scores. This allows marketers to build more effective content calendars based on
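Adobe's Experience League instructions cover the export itself; the short Python sketch below only illustrates the consolidation step described above. It assumes you have already downloaded the CSV exports, and the directory and file names are placeholders.

```python
import pathlib
import pandas as pd

# Hypothetical export files pulled from the Marketo Engage SEO tile
# before the March 31, 2026 cutoff. Adjust paths to your own exports.
EXPORT_DIR = pathlib.Path("marketo_seo_exports")
OUTPUT_FILE = pathlib.Path("marketo_seo_archive.csv")

frames = []
for csv_path in sorted(EXPORT_DIR.glob("*.csv")):
    df = pd.read_csv(csv_path)
    # Tag each row with its source report (keywords, page scores, etc.)
    # so the consolidated archive stays traceable after the tool is gone.
    df["source_report"] = csv_path.stem
    frames.append(df)

if frames:
    archive = pd.concat(frames, ignore_index=True)
    archive.to_csv(OUTPUT_FILE, index=False)
    print(f"Archived {len(archive)} rows from {len(frames)} exports to {OUTPUT_FILE}")
else:
    print(f"No CSV exports found in {EXPORT_DIR} - nothing to archive.")
```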


Why your law firm’s best leads don’t convert after research

Why your law firm’s best leads don’t convert after research In the legal industry, a referral is often considered the gold standard of lead generation. When a former client or a colleague recommends your firm, the hard work of building trust is supposedly already done. The prospect arrives with a baseline of confidence, pre-sold on your expertise. However, a frustrating trend has emerged in recent years: high-quality referrals are entering the top of the funnel but failing to reach the consultation stage. They disappear after doing their own research. If your law firm is seeing a disconnect between the number of people who say they were referred to you and the number of people who actually sign a retainer, the problem likely lies in what is known as the referral validation gap. In the digital-first era, a recommendation is no longer the final step; it is the first. Today’s legal consumers are savvy researchers. They take that trusted recommendation and immediately head to Google, social media, and AI platforms to verify it. If your digital presence contradicts the high praise they received, the lead will vanish before you even know they existed. The referral validation gap represents the critical moments during online research where trust is either solidified or broken. While this phenomenon is particularly prevalent in the legal sector due to the high-stakes nature of the work, these dynamics apply to any professional service or referral-based business. To capture these high-value leads, firms must align their digital footprint with the expectations set by their referrers. The Four Types of Referral Validation Failure Referral loss is rarely accidental; it follows predictable patterns rooted in psychological friction and digital inconsistencies. By identifying where your firm falls short, you can implement specific technical and creative fixes to bridge the gap. We can categorize these failures into four primary areas: credibility, specificity, authority, and friction. 1. Credibility Gaps: The First Impression Crisis Psychological research suggests that website visitors form an opinion about a brand in less than three seconds. For a referred lead, this window is even more critical. They arrive with a mental image of a professional, authoritative, and successful firm based on the recommendation they received. If your website looks like it hasn’t been updated since 2012, or if it feels generic and cluttered, you create an immediate cognitive dissonance. A credibility gap occurs when your digital presence fails to reflect the quality of your legal work. Common culprits include thin attorney biographies, a lack of professional photography, and the use of “hollow” marketing speak. When a site relies on vague terms like “experienced” or “results-driven” without providing the proof to back them up, it triggers skepticism. The prospect’s thought process is simple: “If this lawyer is as good as my friend says, why is their website so unprofessional?” To fix credibility gaps, firms must focus on visual trust signals. This includes high-quality headshots, modern web design that prioritizes readability, and “above-the-fold” placement of credentials, awards, and case results. Technical performance is also a factor here. A slow-loading site or a broken mobile experience suggests a lack of attention to detail—a trait no one wants in their legal counsel. 2. Specificity Gaps: The Disconnect Between Problem and Solution Most legal referrals are highly specific. 
A client isn’t usually referred to a “general lawyer”; they are referred to a lawyer who is “the best at handling complex custody disputes” or “the expert in New York ground lease negotiations.” The problem is that many law firm websites are built to be broad, fearing that narrowing their focus will scare away other leads. When a prospect referred for a specific, painful problem lands on a generic homepage, they don’t see themselves or their issue reflected. If they have to hunt through menus to find a mention of their specific legal challenge, the momentum of the referral dies. They begin to wonder if the person who referred them was mistaken or if the firm has pivoted away from that specialty. Closing the specificity gap requires a robust content strategy that prioritizes practice area landing pages. Each page should speak directly to the nuances of that niche. For example, instead of a broad “Family Law” page, a firm might have detailed sub-pages for “High Net Worth Divorce” or “International Child Abduction.” These pages should feature specific case results and FAQs that address the exact questions a referred prospect is likely to have. If the prospect finds their specific problem described in detail within two clicks, the validation is successful. 3. Authority Gaps: Failing the AI and Third-Party Test In 2024 and beyond, validation happens beyond your own website. Prospects are increasingly using AI search tools like ChatGPT, Perplexity, and Google’s AI Overviews to “vet” their choices. They ask questions like, “Is [Firm Name] actually good at [Niche Specialty]?” or “Who are the top-rated trial lawyers for medical malpractice in Chicago?” If these AI tools cannot find structured, credible information about your firm, they will not confirm the referral. Worse, if a competitor has better-optimized content, the AI might suggest them as an alternative, even though the prospect was looking for you. This is the ultimate authority gap: when the “automated collective intelligence” of the internet fails to back up your human reputation. Authority is no longer just about what you say; it’s about what the digital ecosystem says about you. This involves technical SEO elements like Schema markup (LegalService, Attorney, and FAQ Schema), which helps AI and search engines understand the “entities” associated with your firm. It also involves “Share of Voice” in AI-generated answers. If your firm isn’t appearing in AI citations, you are effectively invisible during a crucial part of the research phase. 4. Friction Gaps: The Breakdown of the Conversion Path Friction gaps are perhaps the most tragic form of referral loss because they happen after the prospect has decided they want to hire you. They have validated your credibility, found your specific expertise, and confirmed your authority via search. They are
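Schema markup is the most directly actionable item in that list. Below is a minimal, hypothetical example of generating LegalService JSON-LD with Python; the firm name, URL, attorneys, and practice areas are placeholders, and the properties shown are a small subset of what schema.org defines for LegalService and Attorney.

```python
import json

# Hypothetical firm details - replace with your own before embedding.
legal_service_schema = {
    "@context": "https://schema.org",
    "@type": "LegalService",
    "name": "Example Family Law Group",
    "url": "https://www.example-firm.com",
    "areaServed": "Chicago, IL",
    "knowsAbout": ["High Net Worth Divorce", "International Child Abduction"],
    "employee": [
        {
            "@type": "Attorney",
            "name": "Jane Doe",
            "jobTitle": "Managing Partner",
        }
    ],
}

# Emit a <script> tag suitable for a practice-area landing page <head>.
json_ld = json.dumps(legal_service_schema, indent=2)
print(f'<script type="application/ld+json">\n{json_ld}\n</script>')
```

Validating the output with a structured-data testing tool before deploying it is a sensible final step.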


7 ways to use storytelling in a business blog

SEO has evolved far beyond the era of simple shortcuts and quick wins. In the modern digital landscape, what drives sustainable results isn’t just the volume of content you produce—it’s content that earns attention, builds deep-seated trust, and ultimately converts a passive visitor into a loyal customer. As search engines like Google become increasingly sophisticated at identifying high-quality, human-centric information, the bridge between technical optimization and genuine user engagement has become narrower than ever.

Storytelling plays a direct and pivotal role in this evolution. When used effectively, narrative techniques do more than just entertain; they improve engagement signals, strengthen topical relevance, and turn generic traffic into purposeful action. By weaving a narrative thread through your business blog, you move from being a mere information provider to a trusted authority that resonates with your audience on a psychological level. Here are seven storytelling techniques you can apply to your business blog to enhance your SEO performance and drive meaningful business outcomes.

7 storytelling techniques that drive engagement and conversions

To master the art of the business blog, you must rethink how your content flows. From the opening hook that captures a wandering eye to the final call to action that seals the deal, every element should serve a narrative purpose. Use these techniques to shape your content into a compelling journey for your readers.

1. Hook the reader

The poet T.S. Eliot is often credited with the remark: “If you start with a bang, you won’t end with a whimper.” In the world of content marketing, this sentiment has never been more relevant. With millions of blog posts published every day, your introduction is the thin line between a high bounce rate and a successful conversion. Many modern authors recommend the technique of “in medias res”—starting a story in the middle of the action and letting readers catch up as the narrative unfolds. While this is common in thrillers or memoirs, you might wonder how it applies to a B2B SaaS blog or a B2C e-commerce site. The truth is, you can still hook your reader using various professional techniques that create immediate intrigue:

Challenge a commonly held belief: Bold statements like “The E-E-A-T model is flawed” or “Keyword research is dead” immediately demand attention because they trigger cognitive dissonance the reader wants to resolve.

Start with a narrative: You don’t need to begin with “Once upon a time.” Instead, describe a specific day in the life of a frustrated manager or the exact moment a business realized its strategy was failing.

Use a striking statistic: Numbers provide instant authority. For example, stating that “Google has 89.9% of search engine market share worldwide” provides a sense of scale and urgency that qualitative descriptions often lack.

Make a bold promise: Address the reader’s desire directly. Ask them: “Would you like to write business blogs that drive organic traffic and convert visitors to customers?”

Empathize with a reader’s problems: Start with a relatable pain point. “Do you struggle with writing business content your customers would actually want to read?” This establishes an immediate connection.

Use a quote that epitomizes your message: A well-chosen quote from an industry leader or philosopher can set the thematic tone for the entire piece.

Don’t be afraid to combine these techniques.
For instance, you might start with a success story (narrative) that highlights a massive growth percentage (statistic) while empathizing with the struggle it took to get there. This layered approach is particularly effective for B2B blogs where trust is the primary currency. 2. Make promises and deliver on them Great stories are built on the foundation of foreshadowing. Whether it is a subtle hint in a mystery novel or the dramatic irony in a play, foreshadowing keeps the audience invested by promising a future payoff. Your business blog should operate on the same principle. To keep a reader moving down the page, you must build suspense. Use phrases like “In this guide, you will learn…” or “By the end of this article, you will discover the secret to…” This creates a mental “open loop” in the reader’s brain, which humans are naturally wired to want to close. Compelling language throughout the body of the post serves as the fuel that keeps them reading until they reach that promised solution. From an SEO perspective, this technique has a secondary, highly technical benefit. This is particularly important the first time you mention a keyword. Regardless of what you write for a meta description, Google often ignores your pre-written snippet and pulls text directly from the page—most commonly from the area where your primary keyword is first mentioned. If that first mention is part of a compelling promise about what your article or product will deliver, it significantly improves your click-through rate (CTR) from the search engine results page (SERP). For more on how to keep readers glued to your page, you can explore these 5 behavioral strategies to make your content more engaging. 3. Talk to your reader directly In literary circles, writers debate the merits of first-person (“I”) versus third-person (“They/He/She”) perspectives. However, business bloggers have a “secret weapon” that fiction writers often avoid: the second person (“You”). Directly addressing your reader creates an intimate, conversational atmosphere. It transforms a lecture into a consultation. Consider the psychological difference between these two statements: “We help our customers to achieve better SEO results.” “We will help you to achieve better SEO results.” The first statement is about the company; the second is about the reader. By centering the reader as the protagonist of the story, you make the content feel personal and actionable. Furthermore, there is a largely overlooked word in content marketing: “My.” While “You” works for the educational portion of the blog, “My” is incredibly powerful for calls to action (CTAs). In a story, the reader imagines themselves as the hero. A CTA that says “Start my free trial” or “Download my guide” reinforces that ownership. Experiment with this phrasing in your buttons and links—you may be


How To Track AI Visibility & Prompts The Right Way via @sejournal, @lorenbaker

The digital marketing landscape is undergoing a tectonic shift. For decades, Search Engine Optimization (SEO) was a relatively straightforward game of keywords, backlinks, and technical health. However, with the rise of Large Language Models (LLMs) and AI-integrated search engines like Google’s Search Generative Experience (SGE), Bing Chat, Perplexity, and OpenAI’s SearchGPT, the rules have changed. It is no longer enough to track which position your website holds on a traditional Search Engine Results Page (SERP). Today, the most critical metric for forward-thinking brands is AI visibility. Understanding how AI models perceive your brand and how often they cite your content in response to user prompts is the next frontier of digital strategy. Tracking AI visibility and prompts allows marketers to move beyond simple rankings and into the realm of influence. To succeed in this new era, you must learn how to monitor, analyze, and optimize your presence within these black-box systems. The Evolution from Keywords to Prompts In traditional search, users enter short, fragmented queries like “best laptop 2024.” In the AI era, user behavior is shifting toward natural language prompts. A user might now ask, “I am a graphic designer looking for a lightweight laptop under $1,500 with a long battery life; what are my best options?” This shift from keywords to complex prompts changes everything for search professionals. Prompts are more conversational, specific, and intent-driven. Because they are more detailed, the responses generated by AI are highly personalized. If you aren’t tracking how AI models handle these specific prompts, you are missing out on a massive segment of the “search” journey. Tracking prompts means understanding the context in which your brand is being mentioned—or why it is being ignored. What is AI Visibility? AI visibility refers to the frequency and prominence with which your brand, product, or content appears in AI-generated responses. Unlike the traditional “10 blue links,” AI visibility is often bundled into a narrative. An AI might summarize three different articles to answer a user’s question. If your content provides the core facts for that summary, you have high visibility, even if the user never clicks through to your site. Tracking this visibility is essential for several reasons. First, it helps you understand your “Share of Model.” Much like Share of Voice, this tells you how much of the AI’s “mindshare” you own compared to competitors. Second, it identifies gaps in your content strategy. If an AI provides an answer that is factually incorrect about your brand or omits you entirely, it indicates a lack of authoritative data available for the model to ingest. Establishing a Framework for Tracking AI Prompts To track AI prompts effectively, you cannot rely on the same tools you use for Google Search Console. You need a specialized framework that accounts for the non-linear nature of AI interactions. Here is how to build that framework from the ground up. 1. Identify Your Core Prompt Categories Start by categorizing the types of prompts your target audience is likely to use. These generally fall into three buckets: Informational Prompts: Users asking for explanations, “how-to” guides, or definitions. (e.g., “How does cloud computing work?”) Comparative Prompts: Users weighing two or more options. (e.g., “Compare the iPhone 15 Pro vs. Samsung S24 Ultra.”) Transactional/Actionable Prompts: Users looking for a specific recommendation or a path to purchase. 
(e.g., “Find me a hotel in New York with a gym and free breakfast.”) By categorizing prompts, you can track which areas your brand excels in and where you are losing ground to competitors. 2. Monitoring Citation and Attribution One of the most valuable forms of AI visibility is the citation. When an AI model like Perplexity or SGE provides a source link, it is a direct endorsement of your authority. Tracking how often you are cited—and for which topics—is the new version of backlink monitoring. You should look for: Direct links to your articles. Brand mentions within the text (even without a link). The sentiment of the mention (positive, neutral, or negative). 3. Analyzing Answer Accuracy AI models are prone to hallucinations. Tracking prompts allows you to see if the AI is presenting your brand accurately. If you find that an LLM is consistently misrepresenting your pricing, features, or company history, you need to investigate your structured data and the clarity of your on-site content to ensure the model is “learning” the correct information. Tools and Methodologies for Measuring AI Presence Since this is a relatively new field, the tooling is still evolving. However, there are several ways to gather data on your AI visibility today. Manual “Secret Shopper” Testing The most basic way to track visibility is to manually interact with various AI models. Create a spreadsheet of your most important “money prompts” and run them through ChatGPT, Claude, Gemini, and Bing. Document whether your brand is mentioned, where the AI is getting its information, and the tone of the response. While time-consuming, this provides qualitative insights that automated tools might miss. Automated AI Tracking Platforms Newer SEO platforms are beginning to offer AI tracking modules. These tools simulate thousands of prompts and aggregate the data to show you your “AI Rank.” They can identify which pages are being used as sources most frequently and highlight when a competitor suddenly gains visibility in a specific niche. Analyzing Referral Traffic While some AI platforms do not pass through clear referral data, many do. Keep a close eye on your analytics for traffic coming from “openai.com,” “perplexity.ai,” or “google.com” (specifically looking for SGE-driven clicks). A spike in traffic from these sources indicates that your content is successfully triggering AI citations. The Importance of Contextual Prompt Engineering To track the “right way,” you must think like a prompt engineer. When testing your visibility, don’t just use one variation of a question. The way a prompt is phrased can significantly alter the AI’s output. This is known as “prompt sensitivity.” For example, if you are a SaaS company, track prompts like: “What is the best CRM for small businesses?” “Which
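The “secret shopper” process is easy to script once your prompt list exists. The sketch below is a minimal, model-agnostic Python example: ask_model is a placeholder stub for whichever AI platform you test against, and the brand, prompts, and output file are illustrative.

```python
import csv
from datetime import date

BRAND = "ExampleCRM"  # hypothetical brand being tracked

# Prompts grouped by the categories described above.
PROMPTS = {
    "informational": ["How does a CRM handle lead scoring?"],
    "comparative": ["Compare the top CRMs for mid-sized manufacturers."],
    "transactional": ["Which CRM should a 200-person manufacturer buy?"],
}

def ask_model(prompt: str) -> str:
    """Placeholder: swap in a real call to the model you are testing
    (ChatGPT, Claude, Gemini, Bing, etc.), or paste answers in manually."""
    return "Example answer text returned by the model."

with open("ai_visibility_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for category, prompts in PROMPTS.items():
        for prompt in prompts:
            answer = ask_model(prompt)
            mentioned = BRAND.lower() in answer.lower()
            # Log the date, prompt category, whether the brand appeared,
            # and a snippet of the response for sentiment review later.
            writer.writerow([date.today(), category, prompt, mentioned, answer[:500]])
```

Run the same script on a regular cadence and the CSV becomes a longitudinal record of your “Share of Model” across prompt categories.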


AI Search Barely Cites Syndicated News Or Press Releases via @sejournal, @MattGSouthern

The digital marketing landscape is currently undergoing its most significant transformation since the invention of the search engine itself. As Artificial Intelligence (AI) begins to dominate how users find information, the traditional metrics of success—keyword rankings and backlink volume—are being replaced by a new, more elusive metric: the AI citation. For years, public relations professionals and SEO specialists have relied on syndication as a cornerstone of their strategy. The idea was simple: distribute a press release to hundreds of news outlets, gain a massive footprint of backlinks, and watch the authority of a brand grow. However, recent data suggests that in the age of AI search, this strategy is not just outdated; it is largely invisible. A comprehensive analysis of over four million AI search citations reveals a stark reality for digital marketers. Syndicated press releases, once the gold standard for broad distribution, barely register in the answers provided by AI search engines like Perplexity, Google’s AI Overviews, and ChatGPT. Instead, these platforms are showing a heavy preference for original editorial content and well-maintained, brand-owned newsrooms. This shift signals a fundamental change in how information must be packaged and published to survive the transition from traditional search to generative AI discovery. The Data Behind the Disconnect The study, which examined four million citations across various AI-driven search platforms, provides a granular look at what LLMs (Large Language Models) deem “worthy” of being cited. The findings indicate that while a press release might be picked up by 500 local news sites, the AI model typically identifies the content as duplicate information. Because AI models are designed to provide the most concise and authoritative answer possible, they have no reason to cite 500 identical versions of a story. They seek the primary source or the most comprehensive editorial analysis of that source. In the hierarchy of AI citations, syndicated content sits at the very bottom. The data shows that the “long tail” of syndication—those dozens or hundreds of small, automated news sites that republish wire service content—contributes almost zero visibility in AI-generated answers. This is a massive wake-up call for companies that have historically measured the success of a PR campaign by the number of “placements” achieved through wire services. Why AI Search Prefers Editorial Over Syndication To understand why AI search engines are snubbing syndicated news, we have to look at how these models are trained and how they retrieve information. AI search isn’t just looking for keywords; it is looking for “information gain.” Information gain is a concept where a piece of content provides new, unique, or more detailed information that wasn’t available in other sources. The Problem of Duplicate Content Syndicated press releases are, by definition, duplicate content. When a wire service blasts a release to 300 different domains, the text remains identical across all of them. For a traditional search engine like Google, canonical tags and sophisticated algorithms have long been used to filter out this noise. For an AI search engine, the goal is even more focused: find the single most authoritative version of a fact. If an AI model sees the same text on 300 sites, it will likely ignore 299 of them. If the original source is a generic PR wire, the AI may skip it entirely in favor of an editorial piece that adds context, expert quotes, and analysis. 
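That filtering behavior can be approximated with standard near-duplicate detection. The Python sketch below is illustrative only, not a description of any engine’s internals: it uses word shingles and Jaccard similarity to show why a verbatim syndicated copy collapses onto its source while an editorial rewrite survives as a distinct document.

```python
def shingles(text: str, n: int = 5) -> set:
    """Break text into overlapping n-word 'shingles' for comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets (1.0 means identical text)."""
    return len(a & b) / len(a | b) if a | b else 0.0

press_release = "Acme Corp today announced the launch of its new widget platform for enterprise teams."
syndicated_copy = press_release  # wire services republish the text verbatim
editorial_piece = ("Acme Corp launched a widget platform this week; analysts say it "
                   "targets enterprise teams but trails rivals on pricing and integrations.")

print(jaccard(shingles(press_release), shingles(syndicated_copy)))  # ~1.0 -> duplicate, keep one copy
print(jaccard(shingles(press_release), shingles(editorial_piece)))  # low  -> adds information, keep
```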
The Value of Context and Analysis Editorial content—written by journalists, industry experts, or specialized bloggers—fares much better in AI citations because it provides context. A press release might announce a new product, but an editorial piece explains how that product fits into the current market, compares it to competitors, and discusses its potential impact. AI models thrive on this connective tissue. They are designed to answer “why” and “how,” not just “what.” Because editorial content is unique and provides a narrative, it offers the “information gain” that LLMs prioritize when building a response for a user. The Rise of the Owned Newsroom One of the most interesting takeaways from the 4-million-citation study is the resilience of “owned newsrooms.” While syndicated versions of news fail, the original source published on a company’s own domain often manages to secure a citation. This highlights the growing importance of brand authority and the “source of truth.” When a company publishes an official statement, a white paper, or a detailed case study on its own “News” or “Insights” section, AI search engines recognize that domain as the primary source. This is particularly true if the brand has established E-E-A-T (Experience, Expertise, Authoritativeness, and Trust). In the eyes of an AI, citing the company that actually created the news is more logical than citing a third-party aggregator that simply republished it. Building a Newsroom for the AI Era For brands to capture AI search traffic, they must pivot from being “distributors” to being “publishers.” An AI-friendly newsroom is not just a list of PDFs or dry corporate announcements. It should include: Unique Data: AI models love statistics and original research. Publishing proprietary data is one of the fastest ways to earn a citation. Expert Perspectives: Content that includes quotes and insights from identifiable experts helps satisfy the “Expertise” component of E-E-A-T, which AI models use to weight sources. Structured Data: Using Schema markup helps AI crawlers understand the context of the news, the entities involved, and the date of publication. Comprehensive Coverage: Rather than a short 400-word blast, high-performing newsrooms publish deep dives that cover a topic from multiple angles. The Impact on Digital PR and SEO Strategy The revelation that syndicated news is ignored by AI search necessitates a total overhaul of digital PR strategies. For years, the industry has been incentivized to focus on volume. Agencies would report to clients that a story was “covered” by hundreds of outlets, even if those outlets were just automated subdomains of local news stations. In an AI-first world, this metric is a vanity metric with zero ROI. From Links to Citations In traditional SEO, a link from a syndicated site might


Walmart: ChatGPT checkout converted 3x worse than website

The Reality Check for Agentic Commerce For the past year, the tech world has been buzzing with the promise of “agentic commerce”—a future where artificial intelligence doesn’t just suggest products but actually handles the entire transaction for you. The vision was simple: you tell ChatGPT you need ingredients for a dinner party or a new set of power tools, and the AI handles the search, the selection, and the checkout without you ever leaving the chat interface. However, recent data from Walmart, the world’s largest retailer, suggests that we are much further from that reality than many anticipated. In a revealing disclosure, Walmart confirmed that conversion rates for purchases made directly inside ChatGPT were three times lower than when users were directed to Walmart’s own website. This massive gap in performance highlights a critical friction point in the evolution of AI-driven shopping. While AI is excellent at discovery and curation, it is currently struggling to close the deal. For marketers, SEO professionals, and e-commerce platform owners, Walmart’s experience serves as a vital case study in why the “owned environment” still reigns supreme in the digital economy. Inside the Experiment: Walmart and OpenAI’s Instant Checkout The experiment began in earnest in November, when Walmart partnered with OpenAI to pilot a feature known as “Instant Checkout.” The initiative offered roughly 200,000 products that could be purchased natively within the ChatGPT interface. The goal was to remove the friction of jumping between apps and websites, creating a seamless “conversational” shopping experience. On paper, it seemed like a win-win. OpenAI could demonstrate the utility of its ecosystem for commerce, and Walmart could reach tech-forward consumers exactly where they were spending their time. However, the results were far from the revolutionary breakthrough both companies hoped for. Daniel Danker, Walmart’s Executive Vice President of Product and Design, did not mince words when describing the outcome. He noted that the in-chat purchases converted at only one-third the rate of traditional click-out transactions. More tellingly, Danker described the native AI checkout experience as “unsatisfying.” Why Instant Checkout Failed to Convert To understand why a 3x difference in conversion exists, we have to look at the psychology of the modern shopper and the technical limitations of current LLM (Large Language Model) interfaces. 1. The Lack of Visual Richness E-commerce is a visual medium. When a user visits Walmart.com, they are greeted with high-resolution images, video demonstrations, 360-degree product views, and detailed size charts. ChatGPT, by its nature, is a text-heavy interface. While it can display images, the rich, interactive experience of a dedicated retail site is difficult to replicate in a scrolling chat window. Shoppers often need that final visual confirmation before hitting “buy,” a step that feels less certain inside a third-party AI tool. 2. The Trust and Security Gap Entering credit card information and personal shipping details into a chatbot feels fundamentally different from doing so on a brand’s official website. Despite the security protocols in place, there is a lingering “trust gap” when it comes to agentic commerce. Consumers are comfortable asking ChatGPT for a recipe or a summary of a news article, but trusting it to handle a financial transaction with a third-party retailer introduces a new layer of hesitation. 3. 
Missing Social Proof and Nuance Walmart’s website is optimized for conversion through social proof—reviews, ratings, and “customers also bought” suggestions. While an AI can summarize reviews, the raw data of seeing thousands of verified purchases and reading specific user feedback provides a level of reassurance that a summarized AI response lacks. If the AI says, “This is a highly-rated drill,” it carries less weight than seeing 5,000 four-star reviews on the product page itself. The Death of OpenAI’s Instant Checkout The disappointing results from the Walmart partnership have had immediate consequences for OpenAI’s product roadmap. Earlier this month, OpenAI confirmed that it is phasing out the “Instant Checkout” feature entirely. This pivot marks a significant shift in how AI labs view commerce. Rather than trying to be the “everything store” that manages transactions internally, OpenAI is moving toward a model where the AI acts as a sophisticated lead generator, handing the final transaction back to the merchant. This is a victory for the traditional web and for brand-owned platforms. It suggests that for the foreseeable future, the “buy” button belongs on the retailer’s site, not in the LLM’s sidebar. Enter Sparky: Walmart’s New Strategy for AI Integration Walmart isn’t abandoning AI; it is simply changing how it integrates with it. The company is moving away from native ChatGPT checkouts and toward an “embedded” model. This involves the deployment of “Sparky,” Walmart’s own proprietary AI shopping assistant. Instead of a generic OpenAI checkout process, Sparky will be embedded within the ChatGPT ecosystem. This new approach changes the dynamic in several key ways: Syncing the Shopping Experience One of the biggest frustrations with the previous model was the lack of continuity. In the new version, users will log into their Walmart accounts through the interface. This allows for cart syncing across platforms. If you add an item to your cart via a conversation in ChatGPT, it will appear in your Walmart app and on the Walmart website. This creates a “persistent cart” that bridges the gap between AI discovery and traditional checkout. Merchant-Handled Transactions By moving the checkout back into Walmart’s system—even if it is triggered from within ChatGPT—Walmart regains control over the user experience. They can ensure that shipping options, loyalty points (like Walmart+), and promotional offers are applied correctly. This “app-based checkout” model is what OpenAI is now favoring for all its merchant partners. Multi-Platform Presence Walmart isn’t putting all its eggs in the OpenAI basket. The company confirmed that a similar integration with Google Gemini is slated for next month. By treating AI platforms as distribution channels rather than transaction hubs, Walmart is positioning itself to be present wherever the consumer starts their search journey. What This Means for SEO and Digital Marketing The Walmart/OpenAI data is a wake-up call for the


Perplexity’s Comet for iOS uses Google Search by default

The Evolution of Perplexity: From Answer Engine to Full-Scale Browser In the rapidly shifting landscape of artificial intelligence, Perplexity has carved out a unique niche as the “answer engine” of choice for power users. However, the company is no longer content with being a simple destination for queries. With the launch of Comet for iOS, Perplexity is moving directly into the territory occupied by Safari and Google Chrome. Comet is not just an application with a search bar; it is a fully realized mobile browser designed to integrate large language models (LLMs) into the fabric of the daily browsing experience. The most striking aspect of this release is the strategic partnership—or rather, the technical reliance—on its primary competitor. Perplexity has confirmed that Comet for iOS uses Google Search as its default engine. For many in the tech industry, this seems like a tactical retreat, but a closer look at the mechanics of mobile search reveals a calculated move toward pragmatism over idealism. By leveraging Google’s established infrastructure for traditional queries while overlaying its own sophisticated AI assistant, Perplexity is attempting to create a “hybrid” browsing model that offers the best of both worlds. Why Comet Defaults to Google Search The decision to set Google as the default search provider within Comet was not made lightly. Aravind Srinivas, the CEO of Perplexity, has been transparent about the reasoning behind this choice. He notes that mobile queries are fundamentally different from desktop queries. When users are on their phones, they are often looking for immediate, actionable, and location-dependent information. These are categories where traditional search engines still hold a massive advantage over generative AI. Specifically, Google excels in three key areas that current LLMs struggle to replicate with high precision: navigation, local search, and transactional intent. If a user searches for “best coffee shop near me” or “track my UPS package,” Google’s massive database and real-time indexing provide an instant, accurate result. Perplexity’s AI, while excellent at synthesizing complex information, can sometimes struggle with the latency and hyper-local accuracy required for these “right now” moments. By using Google as the backbone for these types of queries, Comet ensures that users do not experience a drop in quality when switching from Safari or Chrome. It allows the browser to remain fast and reliable for everyday tasks while saving the “heavy lifting” of AI processing for queries that actually require intelligence and synthesis. The Hybrid Search Experience: How Comet Works Comet is designed to bridge the gap between the “old” web and the “new” AI-driven web. The interface provides traditional search engine results pages (SERPs) for fast, high-intent queries. If you search for a stock price or a weather forecast, Comet serves those results via Google’s engine. However, the Perplexity Assistant is always present, ready to layer advanced intelligence over the standard web experience. This hybrid approach addresses one of the biggest friction points in AI search: speed. Generative AI models take time to process and output text. For a user who just wants to find a website’s login page, waiting five seconds for an AI to write a paragraph is an annoyance. Comet solves this by defaulting to the “fast” path for simple lookups and offering the “deep” path for research and complex questions. 
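The fast-path/deep-path split is essentially query routing. The sketch below is a deliberately naive, hypothetical Python illustration of that idea; real assistants use far richer intent classifiers, and the keyword patterns here are placeholders.

```python
import re

NAVIGATIONAL = re.compile(r"\b(login|sign in|homepage|\.com|\.org)\b", re.I)
LOCAL_OR_LIVE = re.compile(r"\b(near me|open now|track|weather|stock price|score)\b", re.I)

def route_query(query: str) -> str:
    """Send quick, high-intent lookups to a traditional engine ('fast path')
    and open-ended questions to AI synthesis ('deep path')."""
    if NAVIGATIONAL.search(query) or LOCAL_OR_LIVE.search(query):
        return "fast path: traditional search results"
    if len(query.split()) > 8 or query.strip().endswith("?"):
        return "deep path: AI research and synthesis"
    return "fast path: traditional search results"

for q in ["best coffee shop near me",
          "track my UPS package",
          "summarize the arguments for and against a four-day work week"]:
    print(f"{q!r} -> {route_query(q)}")
```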
The Role of the Perplexity Assistant Within the Comet environment, the Perplexity Assistant acts as a digital companion that lives inside the browser. It isn’t just a chatbot tucked away in a menu; it is integrated into the browsing flow. Users can summon the assistant to interact with the page they are currently viewing. For example, if you are reading a long-form investigative article, you can ask the assistant to summarize the key points or explain a specific concept mentioned in the third paragraph. The assistant can also take actions on your behalf. Perplexity has touted the browser’s ability to help with form fills, draft emails based on page content, and even assist with bookings. This moves the browser from a passive viewing tool to an active productivity agent, aligning with the broader industry trend of “AI agents” that can execute tasks rather than just provide information. Key Features of Comet for iOS Comet arrives with a suite of features that differentiate it from standard mobile browsers. These features are built on the premise that a browser should be more than a window to the web; it should be an intelligence tool. Voice-Enabled Browsing On mobile, typing can be a hurdle. Comet emphasizes voice interaction, allowing users to ask complex questions while they browse. This isn’t just basic voice-to-text; the system is designed to understand context. You can ask follow-up questions about a site you are currently visiting without having to re-specify the subject, making the experience feel more like a conversation and less like a series of disjointed searches. Deep Research and Cited Summaries One of Perplexity’s flagship features is “Deep Research,” which has been ported over to the Comet browser. When a user initiates a research task, the AI doesn’t just look at one source. It crawls multiple tabs, analyzes various perspectives, and generates a comprehensive summary with citations. This is particularly useful for students, professionals, and researchers who need to get up to speed on a topic quickly without manually clicking through twenty different Google results. Cross-Tab Synthesis One of the most innovative features of Comet is its ability to research across tabs. Traditional browsers treat tabs as silos—information in tab A has no relationship to information in tab B. Comet’s assistant can look across your open tabs to find connections, summarize common themes, or help you compare products across different retail sites. This is a significant leap forward in mobile productivity. SEO Implications: A New Era for Digital Marketers The launch of Comet and its reliance on Google Search creates a complex new environment for SEO professionals and digital marketers. For years, the industry has speculated that AI search would kill traditional SEO. However, Perplexity’s decision to use Google as a default suggests that


Microsoft Advertising simplifies automated bidding setup

The Evolution of Bidding in Microsoft Advertising

The digital advertising landscape is undergoing a significant transformation, driven largely by the rapid advancement of machine learning and artificial intelligence. Microsoft Advertising, a key player in the search and native advertising space, is staying at the forefront of this evolution by refining its platform to be more intuitive and efficient. Recently, Microsoft announced a strategic shift in how advertisers configure automated bidding, moving away from fragmented settings toward a more consolidated, goal-oriented framework.

This update is not merely a cosmetic change to the user interface. It reflects a fundamental philosophy in modern digital marketing: reduce manual complexity so that advertisers can focus on high-level strategy while the platform’s algorithms handle the granular execution. By folding familiar targets like Target CPA (Cost Per Acquisition) and Target ROAS (Return on Ad Spend) into broader automated strategies, Microsoft is streamlining the campaign creation process without sacrificing the power of its optimization engines.

Simplifying the Automated Bidding Experience

Historically, advertisers on Microsoft Advertising, as on many other platforms, faced an array of bidding options that could feel redundant or confusing. You might have had to choose between “Maximize Conversions” and “Target CPA” as if they were entirely different animals. In reality, these strategies share a common goal: driving as many conversions as possible within specific parameters.

Under the new simplified setup, Microsoft is consolidating these options into two core pillars based on the advertiser’s primary objective.

1. Maximize Conversions

For advertisers whose primary goal is volume, meaning the highest number of leads, sign-ups, or sales within a given budget, the “Maximize Conversions” strategy is the foundation. However, Microsoft recognizes that volume often needs a safety net. Therefore, Target CPA (tCPA) is now an optional layer within the Maximize Conversions framework. Instead of selecting tCPA as a standalone strategy, you simply choose Maximize Conversions and, if desired, input your target cost per acquisition.

2. Maximize Conversion Value

For e-commerce businesses or service providers where not all conversions are equal, “Maximize Conversion Value” is the go-to approach. This strategy focuses on the total revenue or “value” generated by the campaign rather than just the raw count of conversions. Just as with the conversion-focused model, Target ROAS (tROAS) has been integrated as an optional setting. Advertisers can now select Maximize Conversion Value and define a specific return on ad spend goal within that selection.

The Technical Logic: What Has (and Hasn’t) Changed?

A common concern among seasoned PPC (Pay-Per-Click) managers when platforms “simplify” things is whether they are losing control, or whether the underlying algorithm is being altered. Microsoft has been clear on this front: the underlying bidding behavior remains exactly the same. The mathematical models, the data signals used (such as device, location, time of day, and intent), and the way the system bids in real-time auctions have not changed. The update is strictly focused on the configuration experience.
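One way to picture the consolidated setup is as a strategy object per objective, with the target as an optional field rather than a separate strategy. The sketch below is only a mental model; the class and field names are invented for illustration and are not Microsoft Advertising API objects.

```python
# Illustrative model of the consolidated bidding setup, not the real Microsoft
# Advertising API: one strategy per objective, target supplied only if desired.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MaximizeConversions:
    target_cpa: Optional[float] = None      # optional cost-per-acquisition cap, account currency

@dataclass
class MaximizeConversionValue:
    target_roas: Optional[float] = None     # optional return-on-ad-spend goal, e.g. 4.0 = 400%

# Volume-focused campaign with a profitability guardrail:
lead_gen_bidding = MaximizeConversions(target_cpa=50.0)

# Revenue-focused campaign with no explicit ROAS floor (pure value maximization):
ecommerce_bidding = MaximizeConversionValue()
```

The point of the structure is exactly what the article describes: the objective comes first, and the efficiency constraint is an optional parameter layered on top of it, not a competing strategy.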
By grouping these settings, Microsoft nudges advertisers to frame their goals in a more structured way. If your goal is conversions, you start with the conversion strategy. If you have a specific price point you need to hit to remain profitable, you add the tCPA target. This hierarchy makes logical sense and aligns with how modern AI-driven bidding works best: give the machine a clear objective and a boundary to work within.

Why Microsoft Is Making This Move Now

The move toward simplification is part of a broader industry trend toward standardization. Google Ads made similar changes to its bidding structure several years ago, and by aligning its interface with industry norms, Microsoft makes it significantly easier for multi-platform advertisers to manage their campaigns. Here are several reasons why this shift is beneficial for the ecosystem.

Reducing the Barrier to Entry

For small business owners or new digital marketers, the sheer number of bidding options in a modern ad platform can be overwhelming. “Should I use Target CPA or Maximize Conversions?” is a common question that often leads to analysis paralysis. By presenting two clear paths, Conversions or Value, Microsoft lowers the barrier to entry, allowing users to get campaigns up and running faster and with more confidence.

Consistency Across Accounts

For agencies managing dozens or hundreds of accounts, consistency is key to efficiency. This update ensures that the setup process is uniform across all campaigns. It reduces the likelihood of human error where one campaign is set to a legacy standalone tCPA setting while another uses a newer automated strategy, leading to fragmented reporting and optimization workflows.

Focus on Machine Learning Efficiency

Automated bidding thrives on data. By consolidating these strategies, Microsoft can potentially gather and process performance data more effectively across its network. When the system knows that a “Maximize Conversions” campaign with a target is fundamentally trying to achieve the same thing as one without a target (just with an added constraint), it can apply its learnings more broadly, leading to faster “learning phases” for new campaigns.

Practical Implications for Advertisers

If you are currently managing Microsoft Advertising campaigns, you might be wondering how this affects your daily routine. The good news is that the transition is designed to be seamless.

No Disruption to Existing Campaigns

Microsoft has confirmed that any existing campaigns currently using Target CPA or Target ROAS as standalone settings will continue to run without interruption. You do not need to go in and manually update your current campaigns; they will maintain their performance goals and bidding logic. However, when you create a new campaign, you will see the new streamlined interface.

Portfolio Bid Strategies Remain Intact

For advanced advertisers who use Portfolio Bid Strategies to manage multiple campaigns under a single bidding goal, there is no change. These remain a powerful way to aggregate data across different campaign structures to fuel the bidding algorithm, and Microsoft is keeping this functionality as it is.

Optionality Provides Continued Control

It is important to emphasize that while the setup is simpler, the control is still there. Setting an optional Target CPA or Target ROAS gives you the same ability to steer the algorithm toward your efficiency goals that the old standalone strategies did.
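As a quick illustration of how an advertiser might derive the optional targets discussed above, consider the back-of-the-envelope arithmetic below. The numbers are hypothetical and ignore costs other than ad spend; they only show the relationship between a margin goal, a tCPA ceiling, and the equivalent tROAS floor.

```python
# Hypothetical worked example for deriving the optional targets.
# Simplified on purpose: only ad spend is treated as a cost.

average_order_value = 120.0        # revenue per conversion
target_profit_margin = 0.25        # keep 25% of revenue after ad spend

break_even_tcpa = average_order_value * (1 - target_profit_margin)   # $90 max spend per conversion
required_troas = average_order_value / break_even_tcpa               # ~1.33, i.e. 133%

print(f"Set tCPA at or below ${break_even_tcpa:.2f}")
print(f"Or set tROAS at or above {required_troas:.2f} ({required_troas:.0%})")
```

Whether you express the boundary as a cost ceiling (tCPA) or a return floor (tROAS), the algorithm receives the same kind of instruction: a clear objective plus a constraint to respect.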
