How AI-driven shopping discovery changes product page optimization

The e-commerce landscape is undergoing a fundamental transformation. For years, Search Engine Optimization (SEO) for online retailers was a game of keywords, backlink profiles, and technical site architecture. While these elements remain important, the rise of the Agentic Web and AI-driven shopping assistants is shifting the focus toward a new frontier: conversational discovery. Consumers are no longer just typing “best running shoes” into a search bar; they are engaging in complex, multi-turn dialogues with AI agents to solve specific life problems. As consumer behavior leans further into AI-powered tools—ranging from ChatGPT’s shopping research capabilities to specialized Agentic Commerce Protocols (ACP)—the industry has largely focused on the technical infrastructure. However, many brands are missing the most critical shift. Conversational search is changing how visibility is earned on the digital shelf. It is no longer enough to rank for a high-volume term; your product must now survive a rigorous filtering process conducted by an AI that understands deep context and user-specific constraints. There is a prevailing myth that massive, legacy brands will inevitably dominate the AI search era due to their sheer data volume. This is not necessarily the case. When we move beyond generic shorthand and look at the highly specific, nuanced context that users provide to AI, the playing field levels. AI is a matching engine designed to connect specific needs with precise solutions. If a niche brand provides the specific “ground truth” data an AI needs to answer a user’s complex query, that brand can leapfrog a household name that relies on vague marketing copy. This article explores how conversational search is redefining product discovery and details the necessary updates for product detail pages (PDPs) to remain visible and recommended in an AI-first world.

How conversational search builds on semantic search

To understand how to optimize for AI, we must first distinguish between semantic search and conversational search. While the two are often conflated, they serve different roles in the discovery process. Semantic search is the foundation; it focuses on understanding the meaning and intent behind words. It recognizes that “water-resistant” and “hydrophobic” are related concepts, even if the specific keywords don’t match. Conversational search, however, is the ability to maintain a back-and-forth dialogue over time, retaining memory of previous interactions. If semantic search is the engine that understands the query, conversational search is the logic that understands the journey. To illustrate this, consider a restaurant analogy: If semantic search is a chef who knows exactly what you mean when you ask for “something light,” conversational search is the waiter who remembers that you are ordering for a dinner party and that you previously mentioned a peanut allergy. AI blends these two capabilities. It uses semantic understanding to decode complex, multi-layered intent and conversational logic to keep the thread of a user’s story moving forward. For e-commerce brands, this means content must be two things: clear enough for the “chef” (the semantic engine) to interpret and consistent enough for the “waiter” (the conversational thread) to follow.
If your product page lacks the specific details required to answer a follow-up question—such as “Does it come in a version that fits a smaller kitchen?”—you will be dropped from the conversation before the transaction occurs. What conversational search and AI discovery mean for ecommerce The shift toward conversational discovery is best seen in how users are beginning to treat AI as a personal consultant. Consider the real-world example of a consumer using ChatGPT to remodel a kitchen. This user didn’t start with a traditional search for “the best cabinets.” Instead, they utilized the AI as a pseudo-designer and contractor. The AI was tasked with solving specific problems, and product discovery happened naturally as a byproduct of those solutions. In this scenario, discovery is driven by constraint-based queries. The user might ask, “Find cabinets that fit these specific dimensions and match this particular oak wood type,” or “Are these cabinets easy for a DIY installation by someone with minimal tools?” The conversation piles up, allowing the user to narrow down multiple solutions simultaneously. When the AI eventually recommends a product that satisfies all the design, size, and difficulty constraints, the user simply asks, “Where can I buy those?” For brands, the lesson is clear: stop optimizing solely for keywords and start optimizing for tasks. You must identify the specific conversations where your product becomes the inevitable solution. According to the Tinuiti 2026 AI Trends Study, “Recommend products” is the top task users trust AI to handle. This highlights a massive opportunity. If your PDP data cannot answer questions like “Will this fit?” or “Is this easy to maintain?” you will not be part of the AI’s final recommendation set. Your product pages must provide the “ground truth” details—the unvarnished, factual specifications—that these assistants need to make a confident selection on behalf of the user. What to do before you start changing every PDP Before rushing to rewrite every product description, e-commerce teams must change their approach to research. Traditional keyword research tools provide “prompt volumes,” but in an AI-driven environment, intent is far more valuable than volume. You need to understand the high-intent journeys your customers are actually taking. This requires a multi-step audit process to identify high-intent semantic opportunities. Audit your personas and non-negotiables Who is your buyer, and what are their deal-breakers? A “deal-breaker” in conversational search might be a specific material, a compatibility requirement, or a lifestyle constraint. If you haven’t mapped these recently, your PDPs are likely missing the very data points that AI agents use to filter results. Bridge the internal team gap Your SEO team needs to talk to your product and sales departments. These teams are on the front lines and know the specific attributes that drive a sale or lead to a return. They understand the “edge cases” and the “will it work with X?” questions that customers ask every day. This tribal knowledge is exactly what needs to be digitized and placed on the PDP
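One concrete way to make that tribal knowledge machine-readable is structured data on the PDP itself. The snippet below is only an illustrative sketch, not a prescribed implementation: it uses Python to assemble hypothetical Schema.org Product JSON-LD for a cabinet, with placeholder values for the dimensions, material, and installation attributes an AI assistant might filter on.

```python
import json

# Hypothetical "ground truth" attributes for a cabinet PDP, expressed as
# Schema.org Product JSON-LD so assistants can parse exact constraints
# (dimensions, material, installation difficulty) rather than marketing copy.
pdp_structured_data = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Shaker Base Cabinet 24in",  # placeholder product name
    "material": "Oak",
    "width": {"@type": "QuantitativeValue", "value": 24, "unitCode": "INH"},
    "height": {"@type": "QuantitativeValue", "value": 34.5, "unitCode": "INH"},
    "depth": {"@type": "QuantitativeValue", "value": 24, "unitCode": "INH"},
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "Installation", "value": "DIY-friendly, basic tools"},
        {"@type": "PropertyValue", "name": "Care", "value": "Wipe clean with a damp cloth"},
    ],
}

# Emit the script tag a template would render into the PDP's HTML.
print('<script type="application/ld+json">')
print(json.dumps(pdp_structured_data, indent=2))
print("</script>")
```

The same attributes should also appear in the visible page copy, since structured data is most useful when it mirrors what shoppers can actually read on the PDP.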

OpenAI details how ads will work in ChatGPT

The landscape of digital advertising is on the brink of its most significant transformation since the invention of the search engine. OpenAI, the organization that triggered the current artificial intelligence revolution, has finally pulled back the curtain on how it intends to monetize its flagship product, ChatGPT, through advertising. In a revealing episode of the OpenAI podcast, host Andrew Maine sat down with OpenAI executive Assad Awan to provide a comprehensive look at the roadmap for ads within the platform. For months, speculation has swirled regarding how OpenAI would balance its massive operational costs with its commitment to user experience. The details shared by Awan suggest a strategy that prioritizes “trust-first” design, aiming to integrate commercial content without compromising the integrity of the AI’s responses. As ChatGPT transitions from a pure productivity tool into a discovery engine, the introduction of ads represents a pivotal moment for marketers, developers, and the millions of users who rely on the service daily. Who Will See Ads in ChatGPT? One of the most critical questions for any platform introducing a dual-monetization model is who will be affected by the change. OpenAI has established a clear hierarchy to ensure that paying subscribers continue to receive a premium, uninterrupted experience. According to Awan, the rollout of advertisements will be targeted specifically at the non-paying segments of the user base. The primary groups that will encounter ads include users on the “Free” tier and the “Go” tier. By introducing ads to these segments, OpenAI can continue to offer high-level AI capabilities to the general public for free, subsidized by advertising revenue. This follows a well-established “freemium” pattern seen in platforms like Spotify or YouTube, where the barrier to entry is eliminated in exchange for exposure to brand messaging. Conversely, those who contribute directly to the platform’s sustainability through subscriptions will remain shielded. This includes subscribers to ChatGPT Plus and ChatGPT Pro. Most notably, OpenAI is taking a firm stance on corporate environments: Enterprise workspaces will remain entirely ad-free. This distinction is vital for businesses that require a distraction-free environment and have concerns about the professional aesthetic of the tools their employees use. Establishing Guardrails: The Trust-First Framework The introduction of ads into a conversational AI environment presents unique challenges that don’t exist in traditional display or search advertising. In a conversation with an AI, the line between an objective recommendation and a paid promotion can easily become blurred. To combat this, OpenAI has outlined a series of strict guardrails designed to maintain the “sanctity” of the user-AI interaction. Technical and Visual Separation Awan emphasized that ads will be both visually and technically separate from the model’s generated answers. This means that when a user receives a response from ChatGPT, the “organic” answer produced by the LLM (Large Language Model) will not be “poisoned” by advertising instructions. The ads will likely appear as distinct modules or labeled components within the interface, ensuring the user can clearly distinguish between what the AI “thinks” and what a sponsor is saying. The Privacy Promise Privacy remains the most significant hurdle for AI adoption. Many users share personal or proprietary information with ChatGPT to solve complex problems. 
OpenAI has committed to a policy where conversations are not shared with advertisers. Unlike traditional social media models that might sell data packets or provide deep behavioral insights to third parties, OpenAI aims to maintain a wall between the user’s private dialogue and the advertiser’s targeting engine. Exclusion of Sensitive Topics To avoid ethical pitfalls and maintain a high standard of brand safety, OpenAI will not display ads during conversations involving sensitive topics. This includes discussions regarding health, medical advice, and politics. By removing the commercial element from these high-stakes categories, OpenAI protects users from potentially predatory advertising and avoids the controversy of appearing to “endorse” specific political or medical viewpoints through paid placements. The Technical Architecture: A Model That Doesn’t Know It’s Selling Perhaps the most fascinating technical detail shared by Awan is that the underlying AI model itself is unaware of the advertisements. In a traditional search engine, the algorithm is often designed to blend organic and paid results to maximize click-through rates. OpenAI is taking a different approach. The model does not have access to the ad inventory while it is generating a response. It cannot reference a specific ad unless the user explicitly asks about it. For example, if you ask ChatGPT for a recipe for chocolate cake, the AI generates the best recipe it can find. An ad for a specific brand of cocoa powder might appear alongside that recipe, but the AI won’t say, “You should use Brand X cocoa powder because they are a partner.” This separation ensures the AI’s utility remains objective and data-driven, rather than sales-driven. Furthermore, OpenAI is providing users with robust controls. Users will have the ability to adjust or entirely turn off personalization features that inform which ads they see. And, of course, the ultimate “control” remains the option to upgrade to a paid tier to remove advertisements entirely. Prioritizing Trust Over Revenue In a surprising display of corporate philosophy, Awan revealed OpenAI’s internal prioritization framework. When making decisions about the ad platform, the company ranks its priorities in the following order: User Trust User Value Advertiser Value Revenue By placing revenue at the bottom of the list and user trust at the top, OpenAI is signaling a long-term strategy focused on platform longevity rather than short-term gains. This hierarchy suggests that if an ad format is highly profitable but erodes user trust, it will be discarded. This approach is intended to prevent the “enshittification” of the platform—a term often used to describe digital services that degrade their user experience to squeeze out more ad dollars. The Future for Small Businesses: AI as an Advertising Agent One of the most forward-thinking aspects of OpenAI’s vision involves how businesses will interact with the ad platform. Currently, running digital ad campaigns requires navigating complex dashboards, understanding keyword bidding, and managing sophisticated tracking pixels. For

Google Shows How To Check Passage Indexing via @sejournal, @martinibuster

Understanding the Nuances of Google Passage Indexing The landscape of search engine optimization is constantly shifting, moving away from simple keyword matching toward a more sophisticated understanding of intent and context. One of the most significant developments in recent years has been the introduction of what Google initially called “Passage Indexing,” which is more accurately described as passage ranking. This technology allows Google to identify and surface specific sections of a webpage in response to a query, even if the page as a whole covers a broader or slightly different topic. For SEO professionals and content creators, the ability to confirm whether these specific passages are being recognized by Google is crucial. Recently, Google’s Search Advocate, John Mueller, provided clarity on how webmasters can verify if a passage is indexed and ready to perform in the search results. Understanding this process is vital for troubleshooting visibility issues and optimizing long-form content for “needle-in-a-haystack” queries. What is Passage Indexing? Before diving into the methods for checking indexing, it is essential to define what passage indexing—or ranking—actually is. In the past, Google’s systems would evaluate the relevance of an entire page to determine its ranking for a specific query. While the overall theme of the page still matters, passage ranking utilizes deep learning to better understand individual sections of a page. This means that if a user asks a very specific question, Google can find a single paragraph buried deep within a 5,000-word article and rank that page for the query. The page might not be entirely dedicated to that specific answer, but the passage within it is the perfect match for the user’s intent. It is important to note that Google still indexes the entire page; the “passage” aspect refers to how the engine processes and ranks the content internally. The Difference Between Indexing and Ranking In the context of John Mueller’s advice, there is a subtle but important distinction between a page being indexed and a passage being ranked. Indexing means that Google has crawled the URL and added it to its massive database of the web. Ranking refers to where that page (or a specific part of it) appears in the search engine results pages (SERPs) for a given query. When we talk about checking passage indexing, we are essentially verifying that Google’s systems have processed the text deeply enough to recognize it as a distinct piece of information. If Google hasn’t indexed the page correctly, or if it hasn’t parsed the text properly, the passage won’t have the opportunity to rank. How to Check if a Passage is Indexed: The Mueller Method John Mueller offered a straightforward solution for those wondering if their specific content sections are being recognized. The method involves using search operators to see if Google can find the exact string of text you are targeting. While it may seem simple, it is the most effective way to confirm that Google’s index includes the specific wording of your passage. Step 1: The Exact Match Search The most direct way to check for passage indexing is to take a unique sentence or a specific string of text from your content and search for it in quotes. For example, if you have written a unique explanation of a technical process, copy a 10- to 15-word string from that section. 
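For large sites with many passages to verify, it can help to script the query construction. The following sketch simply assembles the exact-match search described above; the passage text is a placeholder, and the printed URL is just a convenience for pasting into a browser.

```python
from urllib.parse import quote_plus

# Placeholder: a unique 10- to 15-word string copied from the page to verify.
passage = "this is my unique explanation of a technical process"

# Wrapping the string in quotes asks Google for that exact word sequence.
query = f'"{passage}"'
print("https://www.google.com/search?q=" + quote_plus(query))
```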
By placing the text in quotation marks (e.g., “this is my unique explanation of a technical process”), you are telling Google to look for that exact sequence of words. If the search results return your page, it confirms that Google has crawled and indexed that specific text. If the search returns no results, it suggests that either the page hasn’t been indexed or the text has been filtered out for some reason. Step 2: Using the Site: Operator To narrow the search down further and ensure you aren’t seeing results from scrapers or other sites, you can combine the quoted text with the site: operator. The syntax would look like this: site:yourdomain.com “specific passage text” This command tells Google to search only within your specific domain for that exact string. If the result appears, you have definitive proof that Google’s index contains that passage from your site. This is particularly helpful for large websites where you want to ensure a specific update or a new section of a long article has been picked up by the latest crawl. Why Your Passage Might Not Show Up If you perform an exact match search and your content does not appear, several factors could be at play. Understanding these hurdles is the first step in troubleshooting your content’s performance. 1. The Content is Too New The most common reason for a passage not appearing is simply time. Even with Google’s high-speed crawling, it can take days or even weeks for a new page or a significant update to be fully processed and integrated into the ranking systems. If you have recently added the passage, wait for Google to recrawl the URL before testing it again. 2. Canonicalization Issues If Google views your content as a duplicate of another page—either on your site or elsewhere—it may choose not to index the specific passage from your URL. Google prefers to keep its index lean and efficient, so it often filters out redundant information. Ensure that your content is original and that your canonical tags are correctly pointing to the version of the page you want to rank. 3. Rendering and JavaScript Challenges If your passage is delivered via JavaScript or is hidden behind “read more” buttons that require user interaction, Google might struggle to see it. While Googlebot has become very proficient at rendering JavaScript, it is not always perfect. If the text isn’t in the initial HTML source code, there is a risk that it won’t be indexed as effectively as static text. 4. Low Quality or Thin Content Google’s “helpful content” systems evaluate the overall quality of a page. If a page is deemed low quality, thin, or purely designed

Google Ads shows recommended experiments

The landscape of digital advertising is shifting from manual management to AI-driven oversight. In a move that further streamlines the path toward account optimization, Google Ads is rolling out a new feature: recommended experiments. This update, recently spotted in the wild by industry experts like Hana Kobzová of PPC News Feed, marks a significant change in how advertisers approach A/B testing and performance scaling. For years, the Experiments page within Google Ads has been a cornerstone for sophisticated marketers who refuse to make changes based on gut feeling alone. However, setting up a proper experiment has historically been a manual, sometimes tedious process. With this latest rollout, Google is utilizing its internal performance data and account-specific signals to surface pre-designed test ideas directly within the dashboard. This not only saves time but also pushes advertisers toward adopting newer, often AI-centric, features that they might otherwise overlook. Understanding the Recommended Experiments Framework The core of this update lies in the integration of proactive suggestions within the Experiments tab. Previously, if a digital marketer wanted to test a new bidding strategy—moving from Manual CPC to Target ROAS, for example—they had to manually create a campaign trial, determine the traffic split, and set specific end dates. Now, Google Ads analyzes the account’s current setup and identifies gaps where a test might yield a performance lift. These recommendations are not generic advice. Instead, they are tailored to the specific data available in the account. If a campaign is seeing high conversion volume but stagnant ROI, Google might suggest a Smart Bidding experiment. If a Search campaign is missing out on relevant traffic, the system might recommend testing Broad Match combined with Smart Bidding. The implementation is designed to be frictionless. When an advertiser navigates to the Experiments page, these suggestions appear alongside the traditional “Create Experiment” workflow. Each recommendation comes with a preconfigured setup, meaning the traffic split, trial duration, and success metrics are already filled out based on Google’s best practices. The Mechanics: How It Works for Advertisers When you encounter a recommended experiment, Google provides a streamlined path to deployment. The process generally follows a three-step logic that emphasizes speed and ease of use: 1. Automated Identification Google’s algorithms scan your active campaigns to look for optimization opportunities. These aren’t just based on what is “missing,” but on what the data suggests could perform better under a different configuration. For instance, the system might notice that your Performance Max campaigns could benefit from a test regarding creative variations or URL expansion. 2. Preconfigured Setup One of the biggest hurdles to frequent testing is the setup time. Recommended experiments remove this barrier. Each suggestion includes a draft version of the experiment with all the technical details—such as the cookie-based or query-based split—already handled. Advertisers can see exactly what the “Trial” arm of the experiment will look like compared to the “Control” arm. 3. Flexible Implementation While Google provides a “one-click” style experience for these experiments, they haven’t removed the ability to customize. 
Advertisers have the option to launch the experiment immediately or enter the settings to tweak the budget split, change the duration, or adjust the specific variables being tested. This hybrid approach caters to both the time-strapped small business owner and the meticulous agency professional. Specific Examples: Final URL Expansion and Beyond One of the specific prompts observed in this update involves Final URL expansion. In many Performance Max and Search campaigns, advertisers have the option to let Google’s AI choose the most relevant landing page on their website based on the user’s search query. Many advertisers are hesitant to enable this, fearing a loss of control over where traffic is sent. By surfacing this as a “recommended experiment,” Google allows advertisers to test the impact of Final URL expansion in a controlled environment. Instead of turning the feature on for the entire campaign and hoping for the best, the advertiser can run a split test. One half of the traffic goes to the manually selected landing pages, while the other half utilizes the automated expansion. The experiment then provides a clear data set showing which approach resulted in a lower Cost Per Acquisition (CPA) or higher Return on Ad Spend (ROAS). Other likely recommendations include: Bidding Strategy Shifts: Moving from Maximize Conversions to Target CPA to find a more efficient scale. Keyword Match Type Tests: Transitioning from Phrase Match to Broad Match in a brand or generic campaign to capture more volume while relying on Smart Bidding for intent filtering. Creative Testing: Testing different headlines or image assets within Responsive Search Ads or Demand Gen campaigns. The Strategic Importance of Lowering the Barrier to Entry In the world of PPC (Pay-Per-Click), the “test and learn” philosophy is often preached but not always practiced. The reason is usually a lack of resources. Smaller teams often don’t have the hours required to design, monitor, and conclude experiments every week. By embedding these suggestions into the workflow, Google is effectively lowering the barrier to entry for high-level account optimization. This is a significant win for account health. Frequent experimentation prevents account stagnation. It allows advertisers to discover new pockets of profitability without risking their entire budget on an unproven change. By making the “test” the default path for change, rather than a total “switch,” Google is encouraging a more scientific approach to account management. The “Big Picture”: Automation and the Future of Google Ads The introduction of recommended experiments is not an isolated update; it is part of a much larger trend. Google is increasingly moving toward a “guided” experience where the platform acts as a co-pilot for the advertiser. We have seen this with the Recommendations tab, the “Apply All” features for optimizations, and the heavy push toward Performance Max. The goal is to move the human advertiser away from the “buttons and levers”—the manual tasks like bid adjustments and keyword pruning—and toward high-level strategy and creative direction. By automating the technical side of experimentation, Google allows marketers to focus on whether

Google Ads simplifies product campaign tracking

The complexity of managing a modern e-commerce presence on Google Ads cannot be overstated. For digital marketers and business owners managing thousands of SKUs, the bridge between the Google Merchant Center and active ad campaigns has often felt like a “black box.” Advertisers have long struggled to pinpoint exactly why certain products aren’t serving or which specific campaigns are bidding on a single item. In a move to provide much-needed clarity, Google Ads has officially rolled out a simplified product campaign tracking feature, streamlining the way advertisers monitor product-level eligibility. This update, recently highlighted by industry experts including Hana Kobzová of PPC News Feed, introduces a dedicated dashboard within the “Products” section of the Google Ads interface. By centralizing visibility, Google is addressing one of the most persistent pain points in retail advertising: the lack of granular, product-to-campaign mapping. Understanding the New Product Eligibility Dashboard The core of this update is found within the Products section of the Google Ads account. Previously, if an advertiser wanted to know if a specific pair of running shoes was active in a Performance Max campaign or a Standard Shopping campaign, they would often have to navigate through multiple layers of campaign settings, ad groups, and product groups. This manual cross-referencing was not only time-consuming but also prone to human error. The new dashboard changes the workflow by putting the product at the center of the data. Instead of looking at a campaign to see which products it contains, you can now look at a product to see which campaigns it is eligible for. The interface includes several key components designed to speed up troubleshooting and optimization. Comprehensive Product Details Table At the heart of the new view is an enhanced table that lists individual products along with their essential metadata. This isn’t just a list of names; it includes the current status of the product, any technical issues identified by the Merchant Center (such as missing GTINs or policy violations), and priority flags. This high-level view allows managers to see at a glance if a top-selling product is hampered by technical debt before they even look at bidding strategies. Visualizing Trends with Line Graphs Data is often easier to digest when visualized. The update includes a line graph that summarizes campaign status trends over time. This is particularly useful for identifying sudden drops in eligibility. For example, if a large segment of your inventory suddenly becomes “Not Eligible” due to a feed processing error or a campaign budget exhaustion, the graph will show a clear dip, allowing for immediate intervention. Advanced Filters for Segmented Views For accounts with massive inventories, scrolling through a table is inefficient. Google has integrated robust filters that allow advertisers to segment their views based on eligibility. You can now filter for products that are eligible for all campaigns, products that are partially eligible, or products that are not running in any campaigns at all. This functionality is critical for large-scale retailers who need to prioritize their time on “ghost products”—items that should be selling but aren’t currently participating in the auction. The Pop-Up Panel: A Game Changer for Granular Control Perhaps the most significant addition to this interface is the new pop-up panel. 
When an advertiser clicks on a specific product, a panel emerges listing every campaign where that product is either “Eligible” or “Not eligible.” This side-by-side comparison eliminates the guesswork. If a product is supposed to be in a high-priority “Clearance” Shopping campaign but shows up as “Not eligible” in the panel, the advertiser can immediately investigate whether it’s a filtering issue within the campaign or a bid adjustment problem. This level of transparency is a significant departure from the older, more opaque product management systems. Solving the Performance Max vs. Shopping Conflict One of the primary reasons this update is being hailed by the PPC community is its ability to diagnose campaign overlap. Since the introduction of Performance Max (PMax), many advertisers have run “hybrid” accounts containing both PMax and Standard Shopping campaigns. However, Google’s internal auction logic usually prioritizes PMax over Standard Shopping if the same product is eligible for both. Before this update, identifying which campaign was actually “claiming” a product was a forensic exercise. Now, by using the product-level eligibility view, advertisers can quickly see if a product is unintentionally overlapping across different campaign types. This visibility is essential for preventing budget cannibalization and ensuring that your bidding strategies are not competing against one another. Optimizing Budget Allocation When you can see exactly where a product is eligible to run, you can make smarter decisions about budget allocation. If you find that your most profitable products are eligible for five different campaigns, you might be spreading your budget too thin, leading to sub-optimal performance across the board. Conversely, if a high-margin item is only eligible for a single, low-traffic campaign, you now have the data needed to expand its reach into more aggressive PMax or Search-heavy campaigns. The Big Picture: Why This Update Matters for Profitability In the world of e-commerce, profitability is often found in the margins of product-level management. Broad campaign-level metrics can often mask underlying issues where individual high-value items are underperforming or not serving at all. Google Ads’ move toward simplifying product tracking is a direct response to the need for more granular control in an increasingly automated advertising landscape. By providing these tools, Google is helping advertisers: Reduce Troubleshooting Time: What used to take thirty minutes of clicking through menus can now be done in seconds via the eligibility panel. Improve Feed Health: By seeing issues directly alongside campaign eligibility, advertisers are reminded to fix Merchant Center errors that have a direct impact on ad delivery. Prevent Revenue Leaks: Identifying “dark” products—those that are active but not assigned to any campaign—ensures that no part of the inventory is left behind. Better Align Strategy with Reality: Advertisers can verify that their organizational structure (e.g., segmenting by brand or category) is actually being reflected in which products are

What 4 AI search experiments reveal about attribution and buying decisions

The Shift from Tracking Clicks to Tracking Conversations For decades, search engine optimization has lived and died by the click. We tracked rankings, monitored click-through rates (CTR), and obsessed over the data inside Google Search Console. But the landscape of digital discovery has undergone a seismic shift. Today, AI search influence is no longer a theoretical concept—it is actively showing up in sales calls and CRM notes before it ever appears in a traditional SEO report. A recent lead for a major agency stated, “Found you via Grok, actually.” This single comment highlights a massive disconnect in the industry. The agency wasn’t actively trying to rank in Grok, Elon Musk’s AI on X. They weren’t using specialized AI prompt tracking tools to monitor it. Yet, the AI was already acting as a brand ambassador, influencing how a high-value buyer discovered and evaluated their services. This disconnect is at the heart of the modern SEO’s dilemma. Clients and stakeholders want visibility in ChatGPT, Perplexity, Gemini, and Google AI Overviews, yet they are hesitant to invest in a channel that doesn’t show up cleanly in traditional attribution models. To bridge this gap, a series of controlled experiments were conducted across diverse digital assets—including agency sites, personal brands, e-commerce stores, and test domains—to understand how AI search actually moves the needle on commercial outcomes. Experiment 1: The Self-Promotional “Best Of” Strategy One of the most debated tactics in the evolving world of Generative Engine Optimization (GEO) is the use of “best of” listicles published on a brand’s own website. The logic is simple: create a list of the top providers in your niche, place your own brand at the number one spot, and wait for Large Language Models (LLMs) to scrape and summarize that data. While this might seem like a transparent marketing ploy to a human reader, LLMs function differently. They prioritize patterns and common citations. A significant study by Ahrefs recently analyzed ChatGPT responses across hundreds of “best” style queries and found that list-based posts were the most frequently cited page type. Crucial factors for these citations included the format of the list and the freshness of the content. To test this, a list titled “Best SEO Agencies in Sydney” was published on a personal brand website, LawrenceHitches.com. The author’s own site was included in the ranking. Within a mere two weeks, the site began appearing across various AI search tools for related queries. The speed of this movement was particularly noteworthy; traditional organic rankings in Google rarely fluctuate that quickly for competitive commercial terms. This experiment proved that LLMs are currently susceptible to surface-level influence from structured, recently updated list content. Experiment 2: Testing the Credibility of Fake Entities Critics of the first experiment might argue that the results were skewed by the existing authority of the personal brand or its associations with established industry names. To eliminate this variable, a second experiment was launched using a fake business. A basic landscaping website was built solely for the purpose of SEO and AI testing, with no prior reputation or digital footprint. The team published a similar “Best Landscapers in Melbourne” list on this brand-new domain. Mirroring the results of the first test, the site appeared in AI search responses within two weeks. 
This confirmed a critical reality: if a brand-new, unverified test site can surface in AI results this quickly, then “visibility” in an AI prompt is not necessarily a proxy for “trust” or “authority.” This creates a significant conflict for modern brands. On one hand, data suggests that “Best X” pages attract AI citations. On the other hand, listing yourself as the top provider on your own website can damage buyer trust if the bias is too obvious. Industry leaders have noted that while founders may clamor for the “secret sauce” to appear in ChatGPT, a strategy built purely on self-promotion without third-party validation is unlikely to be sustainable in the long term as AI models become more sophisticated at identifying brand bias. The Attribution Crisis: Why Prompt Tracking is Often Misleading As brands scramble to understand their AI visibility, a new market of prompt tracking tools has emerged. However, relying on these tools as a primary success metric is dangerous. Research comparing tracking APIs with actual scraped user experiences has shown that brand overlap can be as low as 24%. This means that 75% of the time, the data an API provides might not match what a real user sees in their specific ChatGPT or Gemini session. Because AI responses are non-deterministic and highly personalized based on user history and context, a screenshot of a brand mention is a “vanity metric” in its purest form. Instead of asking, “Did we show up in the prompt?”, sophisticated marketers are now asking, “Did this change how the buyer behaved?” This requires shifting focus toward qualitative signals from the sales floor, such as: Are leads mentioning specific AI tools during the initial discovery call? Is the sales team spending less time on basic education and more on specific solution tailoring? Has the overall speed of the buying cycle increased? Is there a noticeable softening in price resistance from leads coming through these channels? Experiment 3: E-commerce, Digital PR, and the Messy Middle Kadi, an e-commerce brand specializing in luggage, served as the third experimental subject. This test aimed to see if high-authority Digital PR and off-site mentions could drive AI visibility more effectively than on-site technical SEO. The team executed a series of creative data campaigns, including travel studies on “over-touristed destinations” and “airport cybersecurity guides,” as well as product placements in “best suitcase” round-ups. The results were telling. While the digital PR efforts led to authority growth and temporary keyword spikes, they weren’t a “silver bullet.” The real insight came during a Black Friday sale. A customer found Kadi through a ChatGPT query regarding “kids carry-on” luggage. The buyer journey was complex: they used AI to find the brand, but then visited the site to check shipping policies, browsed

How to reduce low-quality leads from Performance Max campaigns

The Performance Max Paradox: High Volume, Low Quality Google’s Performance Max (PMax) has fundamentally changed the landscape of digital advertising. By leveraging machine learning and artificial intelligence, it offers advertisers a streamlined way to access Google’s entire inventory—Search, YouTube, Display, Discover, Gmail, and Maps—from a single campaign. For e-commerce brands, PMax has often been a revelation, driving massive scale and impressive Return on Ad Spend (ROAS). However, for lead generation marketers, the experience has been significantly more volatile. When left to its own devices, Performance Max is exceptionally good at two things: driving incredible lead volume and finding the lowest-quality leads imaginable. This creates a “black box” problem where the algorithm optimizes for the path of least resistance. Since a bot or a non-intent user is “cheaper” to convert than a high-value decision-maker, the AI naturally drifts toward the former to meet its mathematical goals. Many brands are lured in by the promise of low Cost Per Acquisition (CPA) numbers, only to realize weeks later that their CRM is filled with junk, spam, and “leads” that have no intention of ever making a purchase. To make PMax work for lead gen in 2026, you cannot simply “set it and forget it.” You must build rigorous guardrails to wrestle the algorithm into submission and force it to prioritize quality over sheer quantity. Shifting Focus to High-Value Conversion Goals The most common mistake in PMax lead generation is setting the conversion goal to a simple form fill. When you tell Google that a successful outcome is someone clicking “Submit” on a landing page, the algorithm will find the easiest way to generate that click. This often includes bot traffic, accidental clicks on the Display Network, or users who are just looking for free information without any purchase intent. To improve lead quality, you must move your conversion signals further down the sales funnel. Instead of optimizing for a lead, optimize for a qualified lead or a closed deal. This requires a robust integration between your CRM (like HubSpot or Salesforce) and Google Ads. The Power of Offline Conversion Tracking (OCT) Offline Conversion Tracking allows you to feed data back into Google Ads about what happens after the lead is captured. When a sales representative marks a lead as “Sales Qualified” (SQL) or “Closed-Won,” that data is sent back to PMax. Over time, the algorithm learns the profile of users who actually generate revenue, rather than just those who fill out forms. If your data density is low, you may need to move one step up the funnel—perhaps optimizing for “Meeting Booked”—to ensure the AI has enough data points to learn effectively. Enhanced Conversions for Leads Implementing Enhanced Conversions for Leads is another critical step. This feature uses hashed, first-party user data (like email addresses) to provide a more accurate picture of how users convert across devices. By strengthening the connection between the ad click and the eventual conversion, you provide the AI with better-quality data, which leads to better-quality targeting. Leveraging Audience Signals and Customer Match In Performance Max, you don’t target audiences in the traditional sense; instead, you provide “Audience Signals.” These signals serve as a starting point for Google’s AI. If you give the algorithm poor signals, it will start its search in the wrong place. 
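Both Enhanced Conversions for Leads and the Customer Match lists discussed below depend on properly prepared first-party data. As a minimal sketch of that preparation step, the snippet below shows the kind of normalization and SHA-256 hashing Google expects for email addresses; the address is a placeholder, and the actual upload (through the Google Ads interface, the API, or a CRM connector) is outside its scope.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim whitespace and lowercase, then SHA-256 hash - the general shape of the
    preprocessing Google expects for hashed email used in Enhanced Conversions
    and Customer Match uploads."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Placeholder address; in practice this would come from a CRM export.
print(normalize_and_hash("  Jane.Doe@Example.com "))
```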
Focusing on High-Value Lists Rather than uploading a generic list of all website visitors, be surgical with your data. Create audience lists based on high-intent actions. For example, a list of users who have “Booked a Demo” or “Requested Pricing” is far more valuable than a list of people who read a blog post. By using these high-value lists as your primary signals, you are training the AI to look for users who exhibit “buyer behavior” rather than “browser behavior.” The Critical Role of Customer Match Customer Match is perhaps the most powerful tool in your audience signal toolkit. By uploading your actual customer database (properly hashed for privacy), you give Google a blueprint of your ideal customer. The algorithm can then use its vast internal data to find “lookalike” users who share similar characteristics, interests, and search patterns with your best clients. In a world without cookies, first-party data like Customer Match is your greatest competitive advantage. Strategic Campaign Settings and Exclusions Precision is the enemy of waste. Performance Max is designed to be expansive, but for lead generation, you need to be restrictive. The following settings are essential “guardrails” that prevent PMax from spending your budget on irrelevant traffic. Implementing Brand Exclusions By default, PMax often targets your own brand terms. While this might make your campaign stats look amazing (because brand traffic converts at a high rate), it often just cannibalizes traffic you would have received anyway through organic search or a dedicated Brand Search campaign. Use the Brand Exclusions feature to ensure PMax is focusing its efforts on finding *new* customers rather than poaching existing ones. Refining Location and Scheduling Don’t assume that every geographic location is equal. Analyze your historical data to identify regions that produce high-quality leads and those that produce spam. You can restrict your PMax campaign to high-performing geos only. Similarly, consider your ad scheduling. If you find that leads coming in between 2:00 AM and 5:00 AM are consistently low quality or bot-driven, exclude those hours. While it might slightly increase your CPA, it will significantly improve the “cleanliness” of your lead pool. Aggressive Negative Keywords and Placements Google has recently made it easier to add negative keywords to PMax campaigns, both at the account level and the campaign level. Use this to your advantage. If you are a B2B software company, you should be aggressively excluding terms like “free,” “jobs,” “salary,” or “cheap.” Additionally, keep a close eye on your placement reports. If your ads are appearing on low-quality mobile apps or “made-for-advertising” websites, add those placements to your exclusion lists immediately. Hardening Your Lead Capture Forms Sometimes the problem isn’t the traffic; it’s the form. If your form is too easy to

PPC mistakes that humble even experienced marketers

In the fast-paced world of digital advertising, experience is often measured by the number of mistakes an account manager has successfully navigated. Even for those who have spent decades managing seven-figure monthly budgets, the complexity of modern ad platforms ensures that there is always a new way to get humbled. Pay-per-click (PPC) advertising is no longer just about picking keywords and setting bids; it is a sophisticated dance with machine learning, automation, and ever-shifting platform interfaces. During a recent SMX Next session, some of the industry’s top minds shared their “war stories.” Greg Kohler, director of digital marketing at ServiceMaster Brands, and Susan Yen, PPC team lead at SearchLab Digital, joined a candid conversation about the pitfalls that still catch professionals off guard. From the dangers of “Friday fever” to the hidden traps in Google’s automation, these insights serve as a vital checklist for any marketer looking to protect their clients’ budgets and their own sanity.

The Golden Rule of PPC: Never Launch on a Friday

If there is one universal law in the PPC world, it is this: do not launch a new campaign on a Friday afternoon. Despite being common knowledge, this mistake continues to plague the industry. The temptation usually stems from client pressure or a desire to “clear the deck” before the weekend. However, the risks far outweigh the benefits of a few extra days of data. When a campaign goes live, the first 48 hours are critical. This is the period when “ghosts in the machine” tend to appear. If a setting is slightly off—perhaps a daily budget was entered as $1,000 instead of $100, or a decimal point was misplaced in a bid—the error can go unnoticed for two full days. By Monday morning, a significant portion of the monthly budget could have evaporated on irrelevant traffic. The expert consensus is clear: wait until Monday. Even if the campaign is fully built and ready to go by Friday morning, keep it paused. Use Friday for a “fresh eyes” review, or better yet, have a colleague look it over. Overconfidence is a silent killer in PPC; the moment you think you are too experienced to make a clerical error is the moment you are most vulnerable to one. Waiting until Monday ensures that if something breaks, you are at your desk, caffeinated, and ready to fix it immediately.

Location Targeting and the Bulk Upload Disaster

Location targeting is one of the most fundamental aspects of a PPC campaign, yet it remains a frequent source of “humbling” moments. Greg Kohler shared a specific instance involving Google Ads Editor, a tool designed for efficiency that can occasionally become a liability. When copying campaigns in bulk, certain granular settings—like specific geographic exclusions or radius targeting—may not always carry over as expected. In one notable mishap, a bulk upload resulted in campaigns intended for the United States running across Europe while the domestic team was asleep. By the time the mistake was caught on Saturday morning, the campaigns had already generated 10,000 irrelevant impressions. The platform had defaulted to a broader target when the specific geo-settings failed to sync properly. To mitigate this risk, experts recommend a hybrid approach. While Google Ads Editor is excellent for building the skeleton of a campaign, the final geographic “handshake” should often happen within the browser interface.
Explicitly selecting “United States only” (or your specific target region) in the main interface provides a visual confirmation that is often more reliable than a bulk sync. This extra five minutes of manual verification can save thousands of dollars in wasted international spend. The Hidden Dangers of the Search Term Report Trap In an era dominated by Performance Max (PMax) and AI-driven bidding, some marketers have become complacent about reviewing search term reports. Susan Yen emphasizes that this is a critical error. Whether you are running traditional Search campaigns or modern automated ones, the search term report is the only way to truly understand what the user was thinking when they clicked your ad. Neglecting these reports leads to “click chasing”—a state where the algorithm optimizes for clicks that look good on paper but fail to result in qualified leads or sales. The real damage of skipping this step often doesn’t surface for months. Explaining to a client why 40% of their budget went to “free” or “cheap” versions of their high-end service is a conversation no marketer wants to have. The lesson here is to maintain a rigorous schedule. Review search terms at least once a month, regardless of how well the campaign seems to be performing. This practice helps you identify new negative keywords to block and high-performing queries to transition into their own dedicated ad groups. It is about balance; you want to prune the garden without over-restricting the AI’s ability to find new opportunities. The Ongoing Battle: Google Ads Editor vs. The Interface There is a persistent friction between the Google Ads web interface and the desktop Editor tool. Google frequently rolls out new features and settings to the web interface first, often leaving Editor users in the dark for weeks or months. This gap creates a dangerous environment for marketers who rely solely on one tool or the other. SearchLab Digital’s workflow involves building campaigns in Excel first, ensuring ad copy fits character counts and structures are sound. They then move to Editor for the heavy lifting of the upload. However, they stop short of finalizing campaign configurations there. The final “triple-check” always happens in the web interface. Seeing the ads as they will actually appear to the user and seeing the settings in the native environment provides a level of clarity that the spreadsheet-like view of Editor cannot match. For those managing massive accounts—such as franchises with hundreds of locations—Editor is indispensable for spotting inconsistencies. It allows you to see if one location out of five hundred has a different bidding strategy or a missing extension. However, for precision tasks like setting

The Shift From Search Sessions To Decision Sessions via @sejournal, @DuaneForrester

Understanding the Fundamental Shift in Modern Search For more than two decades, the digital marketing industry has been built on the foundation of the “search session.” This process was predictable: a user entered a query into a search box, was presented with a list of ten blue links, and then clicked through to various websites to gather information. The goal of the SEO professional was to ensure their website appeared as high as possible in that list to capture the click. However, we are currently witnessing a seismic shift in how users interact with information online. Driven by advancements in generative artificial intelligence and large language models (LLMs), we are moving away from traditional search sessions and toward what industry experts, including Duane Forrester, describe as “decision sessions.” In a decision session, the user is no longer looking for a list of resources to navigate. Instead, they are looking for a definitive answer or a completed task within a single interface. This shift represents a transition from a “discovery” model to a “fulfillment” model. Understanding this transition is critical for any brand that wishes to remain relevant in an AI-driven search landscape. What Is a Search Session? To understand where we are going, we must first define where we have been. A traditional search session is characterized by exploration and high friction. When a user wants to buy a new laptop, for example, a typical search session might involve: Searching for “best laptops 2024.” Opening four or five different review sites in separate tabs. Comparing specs, prices, and pros/cons across those sites. Refining the search to “best laptops for video editing.” Eventually clicking through to a retailer to make a purchase. In this model, the search engine acts as a concierge, pointing the user toward various destinations. The intelligence resides with the user, who must synthesize the information gathered from multiple sources to reach a conclusion. For businesses, the “click” is the primary currency of value. Defining the Decision Session A decision session, by contrast, happens within the “answer layer” of the search engine or AI agent. Instead of providing a list of links, the AI aggregates the necessary data, compares the options, and presents a synthesized recommendation directly to the user. Using the same laptop example, a decision session might look like this: The user asks an AI, “Which laptop should I buy for 4K video editing under $2,000 that has at least 32GB of RAM?” The AI immediately parses technical specifications, expert reviews, and current pricing across dozens of retailers. It then presents a single, authoritative recommendation (or a curated comparison) and offers a direct link to buy or even handles the transaction itself. In this scenario, the user never visits a review site. They may never even visit a manufacturer’s landing page. The “decision” was made within the AI interface. This is the “answer engine” in action, and it fundamentally changes the relationship between the brand, the search engine, and the consumer. The Rise of Personal Search and AI Agents The catalyst for this shift is the emergence of “Personal Search” and autonomous AI agents. Traditional search engines treat most users the same, relying on broad signals like location or past search history. Modern AI-driven search, however, is becoming deeply personalized. AI models are beginning to understand user intent on a granular level. 
They know your preferences, your past purchases, your technical proficiency, and your stylistic tastes. When search becomes personal, the “session” becomes an ongoing conversation rather than a one-off query. This persistent context allows the AI to facilitate decisions much faster than a human could by manually browsing the web. As AI agents become more sophisticated, they will not just provide information; they will execute actions. We are moving toward a future where a decision session involves the AI saying, “I’ve analyzed the best flights for your trip to Tokyo, verified they fit your calendar, and selected the seat you prefer. Should I book it?” The search session has been entirely compressed into a single decision point. Why the “Answer Layer” Is the New Battleground For years, SEOs have focused on “Zero-Click Searches,” where Google provides an answer in a featured snippet. Decision sessions are the logical—and much more powerful—evolution of this trend. The “answer layer” is where the AI synthesizes content to provide a direct response. If your brand’s information is not part of the data set used by the AI to form its answer, your brand effectively does not exist within that decision session. This creates a high-stakes environment where being “on the first page” is no longer enough. You must be the “chosen data source” that informs the AI’s final recommendation. This requires a shift in strategy from optimizing for keywords to optimizing for entities and trust. AI models prioritize information that is structured, authoritative, and verifiable. If the AI cannot confidently parse your data, it will not include your brand in the decision-making process. How to Prepare for the Shift to Decision Sessions The transition to decision sessions does not mean that SEO is dead; rather, it means that the nature of SEO is changing. To thrive in this new environment, businesses must adapt their digital presence to be “AI-friendly.” Here are the core pillars of a decision-session strategy: 1. Prioritize Structured Data and Schema Markup AI models thrive on structured information. While LLMs are excellent at reading unstructured text, they are much more likely to accurately represent your products, services, and prices if they are clearly defined via Schema.org markup. This reduces the “hallucination” risk for the AI and ensures that your technical specs are categorized correctly in the AI’s internal database. 2. Focus on Entity-Based SEO Search engines are moving away from matching strings (keywords) and toward understanding things (entities). Your brand needs to be a recognized entity with clear relationships to other entities. This involves building a robust digital footprint across authoritative platforms, including Wikipedia, industry-specific directories, and high-authority news outlets. The goal is to ensure the

Uncategorized

Is Google Finally Cracking Down On Self-Promotional Listicles? via @sejournal, @lilyraynyc

The Shift in Google’s Search Landscape

For years, the “Best of” listicle has been a cornerstone of digital marketing and SEO. Whether a consumer is looking for the best project management software, the top-rated gaming headsets, or the most reliable CRM for small businesses, listicles have dominated the first page of search results. However, a significant shift is occurring. Recent data and observations from SEO experts like Lily Ray suggest that Google is finally taking a harder stance against self-promotional listicles—those articles created by brands that conveniently place their own product or service at the very top of every list.

This trend marks a pivotal moment in search engine optimization. As Google refines its ability to distinguish between genuine, expert-led advice and biased marketing collateral, many sites that once relied on self-serving lists are seeing their rankings plummet. The core of this issue lies in trust and the integrity of the user experience. When a user searches for an objective comparison, they expect a fair assessment, not a disguised advertisement. Google’s recent algorithmic adjustments appear to be targeting exactly this discrepancy.

The Mechanics of Self-Promotional Listicles

To understand why Google is cracking down, we must first define what constitutes a self-promotional listicle. In the SaaS (Software as a Service) and B2B tech world, it became common practice to publish blog posts titled “Top 10 Tools for X.” In almost every instance, the company hosting the blog would list its own product as number one. While this is a logical conversion tactic, it often fails to meet the criteria for high-quality, helpful content. These lists frequently lack objective data, fail to mention competitors’ strengths fairly, and ignore the specific drawbacks of the author’s own product.

From an SEO perspective, these pages were designed to capture “middle-of-the-funnel” traffic—users who know what they need but are still deciding on a brand. By dominating these keywords with self-serving content, brands were able to intercept potential customers. However, as Google’s Helpful Content System and Core Updates have become more sophisticated, the algorithm is now better at identifying when a page exists primarily to sell rather than to inform.

Ranking Volatility and the Lily Ray Analysis

The conversation around this crackdown gained significant momentum following an analysis by Lily Ray, a prominent figure in the SEO community. By tracking major ranking fluctuations following Google’s recent core updates, Ray and other researchers noticed a pattern: sites that heavily utilized biased, self-promotional “reviews” were losing visibility. For many, this wasn’t just a minor dip; it represented a total erasure from the first page of search results.

This volatility suggests that Google is re-evaluating the “Review Quality Signals” it uses to rank content. If a site consistently publishes lists where it is the sole beneficiary of the recommendation, Google’s systems may flag that content as lacking independent authority. The algorithm is increasingly looking for “Information Gain”—a concept where a piece of content must provide new, unique value that isn’t already present on a dozen other websites. A self-serving list that repeats the same marketing fluff found on a homepage rarely provides true information gain.

Why Trust is the New SEO Currency

At the heart of these changes is the concept of E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness. While all four pillars are essential, Trust is arguably the most critical. Google’s Search Quality Rater Guidelines state that trust is the “most important member of the E-E-A-T family.” A self-promotional listicle inherently creates a conflict of interest, which directly undermines the trust factor.

When a brand reviews its own product against competitors, the “Experience” and “Expertise” might be present, but the “Trustworthiness” is compromised. Google’s systems are now designed to favor third-party review sites, independent creators, and forums like Reddit or Quora where users share unbiased, first-hand experiences. This explains why we have seen a massive surge in the visibility of “hidden gems” and user-generated content in the SERPs over the last year. Google is prioritizing the voices of real people over the voices of corporate marketing departments.

The Impact on Product Review Quality Signals

Google has been very transparent about what it considers a high-quality product review. Its official documentation encourages creators to provide evidence such as visuals, audio, or other links that document their own experience with the product, and to discuss the benefits and drawbacks of various products based on original research. Self-promotional listicles almost never do this. Instead, they often provide a glowing review of their own product and a superficial, or even outdated, summary of their competitors.

As Google integrates more of its Product Review Update logic into the broader core algorithm, these “Review Quality Signals” are being applied to all types of comparative content. If your listicle doesn’t provide a balanced view or transparent criteria for its rankings, it is now a liability rather than an asset. The crackdown isn’t just about the “Best of” keyword; it’s about any content that purports to offer a choice while actually removing the user’s ability to make an informed decision.

The Rise of Third-Party Authority

One of the most visible consequences of this crackdown is the shifting of traffic to independent review platforms. Sites like G2, Capterra, and TrustRadius have seen significant growth in certain sectors because they aggregate hundreds of individual user opinions. Similarly, niche-specific review sites that do not sell a competing product are being favored because their revenue model (often affiliate-based) is seen as slightly more detached from the product itself, provided they maintain editorial independence.

For brands, this means that “off-page SEO” and brand reputation management are becoming more important than ever. You can no longer rely on your own blog to tell the world you are the best. You need other, more authoritative voices to say it for you. Google is effectively forcing brands to move away from self-proclamation and toward earned recognition.

How to Adapt Your Content Strategy

If your site has been hit by ranking volatility, or if you want to bulletproof your strategy against future updates, a change in approach is necessary.
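One practical adjustment is to publish genuinely balanced comparisons and to make their trade-offs machine-readable. The sketch below is illustrative only: the product name, rating, and pros and cons are placeholders, and it assumes an independent editorial review using the schema.org Review vocabulary (including the positiveNotes and negativeNotes lists), so verify against Google’s current structured data documentation before relying on it.

<!-- Illustrative placeholder values only -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Project Management Tool",
  "review": {
    "@type": "Review",
    "author": { "@type": "Person", "name": "Independent Reviewer" },
    "reviewRating": { "@type": "Rating", "ratingValue": "4", "bestRating": "5" },
    "positiveNotes": {
      "@type": "ItemList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Fast onboarding for small teams" },
        { "@type": "ListItem", "position": 2, "name": "Transparent per-seat pricing" }
      ]
    },
    "negativeNotes": {
      "@type": "ItemList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Limited reporting on the entry-level plan" }
      ]
    }
  }
}
</script>

Markup like this does not manufacture trust, but it makes a balanced assessment easier for review systems and AI assistants to verify, which is exactly the signal self-promotional listicles fail to send.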
