Author name: aftabkhannewemail@gmail.com


ChatGPT ads expand as more brands and trigger patterns emerge

The Shift in AI Interaction: ChatGPT’s Growing Advertising Ecosystem

For nearly two years, ChatGPT existed as a sanctuary from the traditional advertising models that define the modern internet. While Google Search and social media platforms became increasingly saturated with sponsored content, OpenAI’s flagship product offered a clean, conversational interface focused solely on providing answers. However, as the platform matures and the need for sustainable monetization grows, that era is coming to a definitive end.

Recent data suggests that ChatGPT is not just experimenting with ads—it is rapidly scaling them. New intelligence from AI ad monitoring firms reveals a significant expansion in the variety of brands appearing within ChatGPT responses. More importantly, we are beginning to see the emergence of predictable “trigger patterns” that dictate when and how these ads are served to users.

For digital marketers and SEO professionals, this transition marks a pivotal moment in the history of search marketing. We are moving from the era of Search Engine Results Pages (SERPs) to an era of Conversational AI Advertising.

Who is Advertising on ChatGPT? A Look at the Early Adopters

The first wave of advertisers inside ChatGPT was relatively small, often limited to testing phases or specific partner integrations. However, the latest reports from Adthena indicate that a diverse range of major brands has now entered the fray. These companies represent various sectors, from telecommunications and electronics to travel and home goods.

The brands currently making a visible impact within ChatGPT include:

- Best Buy: Focusing heavily on consumer electronics and tech-related queries.
- AT&T: Targeting users looking for mobile upgrades and telecommunications services.
- Pottery Barn: Capturing intent in the home decor and furniture space.
- Enterprise: Appearing in travel and transportation contexts.
- Qualcomm: Positioning itself within broader tech and hardware discussions.
- Expedia: Reaching users in the planning phase of their travel journeys.

The presence of these massive brands suggests that OpenAI is leveraging high-value partnerships to establish its advertising framework. These are not small-scale tests; these are coordinated efforts by industry leaders to claim “share of voice” in a medium that is fundamentally different from traditional search engines.

Decoding the Trigger: How Ads Are Served in a Conversation

One of the most fascinating aspects of this expansion is the logic behind when an ad appears. In a standard search engine, ads are triggered by specific keywords entered into a search bar. In ChatGPT, the process is more nuanced. Because the interaction is a dialogue, the AI must determine at which point in the conversation a commercial recommendation is relevant and helpful rather than intrusive.

Based on an analysis of over 1,500 prompts, several distinct patterns have emerged. Understanding these patterns is essential for any brand hoping to navigate this new landscape.

The Power of High-Intent Modifiers

Current data indicates that ads are most frequently triggered by specific high-intent modifiers. Phrases like “best,” “new,” and “top-rated” act as powerful catalysts for ad delivery. For example, a user asking “What is the best phone?” is far more likely to see a sponsored recommendation from AT&T or Best Buy than a user asking “How do cell phones work?”

This suggests that OpenAI’s advertising engine currently prioritizes transactional intent. The system is designed to identify when a user is in a “buying state” and provides a sponsored link as a shortcut to a solution. This mirrors the early days of Google AdWords, where the most valuable real estate was reserved for keywords that indicated a readiness to purchase.

First-Prompt Dominance vs. Repetition Patterns

Interestingly, the timing of ad delivery is not always immediate.
While the majority of ads appear on the very first prompt—the initial question a user asks—there is a secondary pattern emerging. Some ads only trigger after the third or fourth repetition of a similar query.

This “persistence” factor could indicate a few things about the algorithm. It may be that the AI is programmed to wait for a certain threshold of intent before showing an ad to avoid cluttering more casual conversations. Alternatively, if a user asks for a recommendation multiple times or refines their query with more specific details, the AI may interpret this as a deeper need for a commercial solution, thus triggering the sponsored placement.

Keyword Simplification in an AI World

While ChatGPT is famous for its ability to understand complex, emotional, and nuanced language, the current ad triggers remain relatively simple. They are grounded in traditional commercial keywords rather than the “vibe” of the conversation. For instance, prompts like “I need a new phone” or “I need to buy a new desk” are reliable triggers.

This simplicity is a double-edged sword. For advertisers, it means their existing keyword strategies from search engines may translate reasonably well to AI. For OpenAI, it suggests that their ad delivery system is still in an iterative phase, focusing on clear-cut intent before attempting to solve for more complex or subtle commercial opportunities.

The Battle for Share of Voice: The Best Buy Example

In traditional search, a brand might aim for the top position on a page filled with ten blue links and several sponsored snippets. In ChatGPT, the “real estate” is much more limited. Often, there is only room for one or two recommendations within a single response. This makes the concept of “Share of Voice” (SOV) more critical than ever.

Recent observations have shown instances where a single brand, such as Best Buy, secured two distinct ad placements within a single response for iPhone-related queries. This is a significant development.
It suggests that if a brand’s bid and relevance score are high enough, they could potentially monopolize the commercial space within a specific conversation, leaving no room for competitors. This “winner-takes-most” environment will likely lead to intense competition among top-tier brands. In a conversational interface, being the second or third recommendation is far less valuable than it is on a traditional search page, as users are less likely to scroll through a list and more likely to click the first helpful link provided by the AI. Why the


4 Sites That Recovered From Google’s December 2025 Core Update – What They Changed via @sejournal, @marie_haynes

The Impact of the December 2025 Core Update

Google’s algorithm updates are a constant source of anxiety for digital publishers, SEO professionals, and business owners. The December 2025 Core Update was particularly significant, marking a shift in how the search engine evaluates the intersection of artificial intelligence, user experience, and content depth. While many sites saw a decline in visibility, a handful of publishers managed to orchestrate a successful recovery by identifying systemic issues and implementing rigorous, data-driven changes. Understanding these success stories is critical for any webmaster looking to survive in the current search landscape.

The December 2025 update focused heavily on refining “Helpful Content” signals, specifically targeting sites that leveraged excessive AI-generated content without human oversight or those that failed to demonstrate genuine Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T). By examining four specific sites that turned their fortunes around, we can extract a blueprint for long-term SEO resilience.

Site 1: The Niche Product Review Platform

The first site in our analysis is a medium-sized affiliate site dedicated to high-end electronics and gaming hardware. Before the December update, this site had relied on a high volume of product roundups that followed a standard “Top 10” format. When the update hit, their organic traffic dropped by nearly 45%, with many of their primary keywords falling from the first page to the third or fourth.

What Went Wrong?

The site had fallen into the trap of “thin” affiliate content. While the writing was technically competent, it lacked original testing data. Google’s 2025 systems became much better at detecting whether a reviewer had actually handled a product or was simply rewriting specifications found on manufacturer websites. The site’s reliance on stock imagery and generic descriptions flagged it as low-value in the eyes of the December update.
The Recovery Strategy

To recover, the editorial team pivoted toward a “hands-on first” policy. They stopped publishing 20 articles a month and scaled back to five high-quality, long-form reviews. Each new piece of content included original high-resolution photography of the product being used, custom benchmarking charts, and a section dedicated to “Why you should trust us,” detailing the specific testing environment used for the hardware.

Furthermore, they cleaned up their older content. Instead of deleting underperforming pages, they merged similar reviews into comprehensive “Buyer’s Guides” that provided much more context. By late spring 2026, the site had not only regained its pre-December traffic but exceeded it by 15%, proving that quality and original data are the ultimate defenses against core update volatility.

Site 2: The Local News and Lifestyle Portal

The second site is a regional news outlet that provides local reporting, lifestyle tips, and event coverage. This site suffered a significant blow in December 2025, primarily affecting its lifestyle and “evergreen” informational articles. Their news reporting remained relatively stable, but their broader guides on topics like “How to find the best mortgage rates” or “Healthy eating on a budget” plummeted.

The Core Issues

The primary issue for this publisher was “topic dilution.” By trying to cover national-level financial and health topics to capture high-volume search traffic, they lost their topical authority as a local news source. Google viewed these lifestyle pieces as lacking the specific expertise required for “Your Money or Your Life” (YMYL) topics, especially when compared to dedicated financial or medical journals.

The Roadmap to Recovery

Recovery for this site involved a ruthless pruning of content. The editorial team removed or “no-indexed” hundreds of generic lifestyle articles that had no local relevance. They then doubled down on their core strength: local expertise.
They began creating deep-dive content focused on the local economy, local real estate trends, and community-specific health resources. They also improved their author transparency. Every article was updated with a robust author bio that linked to the writer’s LinkedIn profile, previous work history, and professional credentials. This signaled to Google that the content was being produced by real journalists with a history in the field. By aligning their content strategy with their established niche, the site regained its ranking for local search terms within four months.

Site 3: The E-commerce/Affiliate Hybrid

Site number three operates in the home improvement and DIY space, selling both direct products and offering curated recommendations for specialized tools. This site was hit hard not because of content quality, but because of technical debt and a confusing user experience (UX). The December 2025 update placed a renewed emphasis on the “Trust” element of E-E-A-T, which includes the technical security and usability of a site.

Identifying the Obstacles

A deep audit revealed that the site had intrusive interstitial ads that hindered the mobile experience. Additionally, the site’s navigation was cluttered, making it difficult for users to find the information they needed without clicking through multiple layers of “fluff” content. The core update essentially penalized the site for prioritizing ad revenue over the user’s journey.

Implementing the Fix

The recovery process was largely technical. The site owners overhauled the mobile UI, removing intrusive pop-ups and streamlining the checkout process. From a content perspective, they implemented “Product Schema” more effectively, providing Google with clear data about price, availability, and user reviews. They also addressed the “helpfulness” of their product pages.
Instead of just listing features, they added FAQ sections based on real customer inquiries and integrated “how-to” videos that showed the tools in action. By improving the utility of the site, they lowered their bounce rate significantly. Google’s algorithms recognized this improved user engagement, leading to a steady recovery in rankings throughout the first half of 2026.

Site 4: The Tech Tutorial and Software Blog

The final site in our case study is a tutorial-heavy blog focusing on software development and AI tools. This site saw a sharp decline in December 2025, which was surprising given its high technical accuracy. However, a closer look revealed that the site was a victim of the “AI Content Overload” filters that Google refined during this update.

The AI Trap

While the site’s tutorials were accurate, they were structured in a way that


How to use Google Ads Performance Planner and Reach Planner

When most digital marketers navigate to the “Tools and Settings” menu in Google Ads and hover over the “Planning” section, their cursor almost instinctively moves toward the Keyword Planner. It is a foundational tool that has served the industry for over a decade. However, stopping there means leaving some of Google’s most powerful predictive technology on the table. Tucked away in that same menu are two sophisticated engines: Performance Planner and Reach Planner.

As the digital advertising landscape shifts from manual bidding toward automation and machine learning, the ability to forecast results becomes more critical than ever. Advertisers can no longer afford to “guess and check” with their monthly budgets. Performance Planner and Reach Planner allow you to move from a reactive strategy—where you adjust budgets based on last month’s performance—to a proactive strategy where you model future outcomes based on real-time data trends. This guide explores how to master both tools to scale your account effectively.

Why Performance Planner matters for scaling search and display

Performance Planner is a forecasting tool that allows you to see how changes in your ad spend might affect key metrics like conversions, clicks, and conversion value. It is essentially a “what-if” machine. If you increase your budget by 20%, will your Cost Per Acquisition (CPA) remain stable, or will you hit a point of diminishing returns? Performance Planner attempts to answer these questions using billions of search queries and auction data points.

The primary value of this tool lies in its ability to simulate the auction environment. Google’s algorithms look at historical data and the current competitive landscape to predict how your specific campaigns would perform under different budget constraints. This is particularly useful for account managers who need to justify budget increases to stakeholders or clients.
Instead of saying, “I think we should spend more,” you can present a data-backed forecast showing a projected 15% increase in conversions for a 10% increase in spend.

Performance Planner is not a one-time setup tool; it is designed for ongoing optimization. Google typically refreshes the forecasts daily, pulling from the most recent 7 to 10 days of data. This ensures that the projections account for sudden shifts in market demand, seasonal trends, or competitor behavior. Recently, Google added “Suggested Plans” to the dashboard, which identifies low-hanging fruit—campaigns where a slight budget or bid adjustment could lead to a significant performance lift without the user having to build a manual model from scratch.

How to create a new performance plan

Building your first performance plan is a straightforward process, but it requires careful selection of parameters to ensure the output is actionable. To begin, navigate to Tools → Planning → Performance Planner and click “Create new plan.”

Step 1: Set Your Foundation

Once you start a new plan, you will be prompted to select the timeframe and the specific dates you want to forecast. While you can choose custom dates, many advertisers align their plans with monthly or quarterly fiscal cycles. You will also need to choose the channel—Search, Display, or Shopping. If your account is multi-channel, you will see various options here.

Step 2: Choose Your Metrics and Targets

The tool asks you to define what success looks like. You can select key metrics such as Conversions, Conversion Value, or Clicks. Furthermore, you can input a specific target, such as a Target CPA (Cost Per Action) or a specific spend limit. This is crucial because it tells the algorithm whether to prioritize efficiency (maintaining a specific CPA) or volume (maximizing conversions within a budget).

Step 3: Select Eligible Campaigns

You must choose which campaigns to include in the plan. It is often best to group campaigns with similar goals together.
For example, grouping all your “Brand Awareness” campaigns in one plan and “Direct Response” campaigns in another allows for cleaner data. Note that not every campaign will be eligible; Google requires a certain threshold of historical data and specific bid settings for its predictive models to work accurately.

In some cases, Google may suggest a $0 budget for certain campaigns within your plan. This isn’t a glitch. It usually indicates that the campaign is currently so inefficient compared to others in the group that the algorithm believes your total budget would be better spent elsewhere. This insight alone can help you trim waste and reallocate funds to high-performing areas.

Campaign eligibility and limitations to know

To ensure the accuracy of its forecasts, Google Ads enforces strict eligibility criteria. If your campaigns are missing from the selection list, they likely haven’t met one of the following requirements. These rules differ significantly between Search and Shopping campaigns.

Search campaigns

For Search campaigns to be eligible for Performance Planner, they must meet the following technical standards:

- Bid Strategy: The campaign must use a supported strategy such as Manual CPC, Enhanced CPC, Maximize Clicks, Maximize Conversions, Maximize Conversion Value, Target ROAS, Target CPA, or Target Impression Share. Crucially, you must not have changed the bid strategy in the last 7 days.
- Longevity: The campaign must have been active for at least 72 hours.
- Activity Density: To provide a statistically significant forecast, the campaign must have received at least 3 clicks and 3 conversions in the last 7 days.
- Budget Constraints: For Target Impression Share campaigns, the “Search lost IS (budget)” must be less than 5% over the last 10 days.
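These rules are easy to codify as a pre-flight audit before you sit down to build a plan. The sketch below is a hypothetical helper: the field names, class, and function are illustrative inventions that encode the Search-campaign checklist above, not calls to the actual Google Ads API.

```python
from dataclasses import dataclass

# Bid strategies Performance Planner supports for Search campaigns
# (names here are illustrative labels, not API enum values).
SUPPORTED_STRATEGIES = {
    "MANUAL_CPC", "ENHANCED_CPC", "MAXIMIZE_CLICKS", "MAXIMIZE_CONVERSIONS",
    "MAXIMIZE_CONVERSION_VALUE", "TARGET_ROAS", "TARGET_CPA",
    "TARGET_IMPRESSION_SHARE",
}

@dataclass
class SearchCampaign:
    bid_strategy: str
    days_since_strategy_change: int
    hours_active: int
    clicks_last_7d: int
    conversions_last_7d: int
    lost_is_budget_last_10d: float  # fraction, e.g. 0.03 = 3%

def is_planner_eligible(c: SearchCampaign) -> bool:
    """Apply the Search-campaign eligibility rules listed above."""
    if c.bid_strategy not in SUPPORTED_STRATEGIES:
        return False
    if c.days_since_strategy_change < 7:   # no bid-strategy change in last 7 days
        return False
    if c.hours_active < 72:                # active for at least 72 hours
        return False
    if c.clicks_last_7d < 3 or c.conversions_last_7d < 3:
        return False                       # at least 3 clicks and 3 conversions
    if c.bid_strategy == "TARGET_IMPRESSION_SHARE" and c.lost_is_budget_last_10d >= 0.05:
        return False                       # "Search lost IS (budget)" must be < 5%
    return True
```

Running every campaign in an account through a check like this before opening the planner tells you in advance which ones will be missing from the selection list, and why.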
Shopping campaigns (Standard)

Standard Shopping campaigns have their own set of prerequisites to ensure the product feed data and auction history are sufficient for modeling:

- Bid Strategy: The campaign cannot be part of a portfolio bid strategy.
- Spend Consistency: It must have been active every day for the last 10 days with a minimum spend of at least $10 USD per day.
- Traffic Volume: The campaign needs at least 100 impressions in the last 7 days.
- Conversion History: There must be at least 10 conversions or conversion values recorded in the last 10 days.
- Budget Health: The campaign status cannot be


How to use AI response patterns to build better content

The digital landscape has undergone a seismic shift over the last year. For SEO professionals and content strategists, the focus has moved from simply ranking on page one of Google to understanding how to maintain visibility in a world dominated by Large Language Models (LLMs) and AI-driven search results. The central question many are asking is: How do we report on AI visibility, and what does it actually take to be cited by platforms like ChatGPT, Claude, Gemini, and Google’s AI Overviews?

Recent research has complicated this mission. A study by Rand Fishkin at SparkToro regarding AI response variability has sent ripples through the marketing community. The data suggests that LLM outputs are nowhere near as stable or predictable as traditional search engine rankings. This inconsistency makes AI visibility a difficult KPI to track using old-school methods. However, rather than viewing this variability as a roadblock, savvy content creators are beginning to view it as a goldmine of data. By shifting focus from “rank tracking” to “pattern analysis,” you can use AI responses to build a more robust, authoritative content strategy.

Understanding the Instability of AI Recommendations

The SparkToro study revealed a startling reality: there is less than a 1 in 100 chance that ChatGPT or Google’s AI will return the exact same list of brands or products across two different sessions, even when the prompt is identical. Researchers analyzed thousands of prompts across multiple LLMs to highlight this extreme level of variance. For a CMO looking for a steady “rank” to report to the board, this is a nightmare. For a content strategist, it is a signal that the rules of engagement have changed.

Traditional search engines are deterministic to an extent; they use a relatively stable set of ranking factors (backlinks, technical health, content relevance) to produce a list of results that remains fairly consistent for a period of time. LLMs, conversely, are probabilistic.
They don’t “rank” websites in a database; they predict the next best word based on a massive web of associations and the specific context of the user’s prompt. Because these models use context windows and varying levels of “temperature” (the setting that controls randomness in the output), they synthesize information differently every time. This means that rank tracking at scale, while not useless, is often misapplied. Instead of treating an AI citation as a fixed position on a leaderboard, we must treat it as a data point in a larger behavioral pattern.

The Shift from Traditional SEO to AI Pattern Analysis

In the traditional SEO world, we are experts at reverse engineering. We look at the top three results for a keyword, analyze their backlink profiles, word counts, and header structures, and then try to create something better. AI search requires a similar mindset but a different methodology. We are no longer reverse engineering a static algorithm; we are reverse engineering the way a model synthesizes human knowledge.

The goal of AI pattern analysis is to understand the “conceptual consensus” the model has reached about a specific topic. If you ask a model about a topic 50 times and it mentions a specific feature 45 times, that feature is a fundamental component of the model’s understanding. If your content doesn’t mention that feature, you are effectively invisible to the model’s synthesis process.

Traditional SEO | AI Pattern Analysis
Measures specific rankings and positions. | Understands how concepts are synthesized.
Focuses on content gap analysis (keywords). | Focuses on topic associations and entities.
Deals with fixed, relatively stable SERPs. | Deals with dynamic, probability-based responses.
Relies on deterministic signals like backlinks. | Relies on semantic relevance and probability.

To find a pattern, you don’t need the AI to say the exact same thing every time. You are looking for themes, structures, and recurring topics.
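This kind of theme counting is straightforward to automate. A minimal sketch, assuming each sampled AI response has already been reduced to a set of theme labels (the extraction step, whether entity tagging or manual coding, is separate) and using an illustrative 75% cutoff:

```python
from collections import Counter

def recurring_themes(responses, threshold=0.75):
    """Return themes that appear in at least `threshold` of the sampled responses.

    `responses` is a list of sets, one set of theme labels per AI answer.
    The 0.75 default mirrors the practical benchmark discussed in this article;
    it is not a scientific constant.
    """
    # set() guards against counting a theme twice within one response
    counts = Counter(theme for answer in responses for theme in set(answer))
    cutoff = threshold * len(responses)
    return {theme for theme, n in counts.items() if n >= cutoff}
```

For example, if “pricing transparency” is tagged in nine of twelve sampled answers, it clears the 75% bar and is returned; a theme tagged in eight of twelve is treated as noise.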
A reliable pattern can be defined by three main criteria:

- The element appears in 75% or more of the model’s outputs.
- The element appears across at least two different models (e.g., GPT-4 and Gemini).
- The element remains consistent across multiple iterations of the same prompt cluster.

While the 75% threshold isn’t a hard scientific rule, it serves as a practical benchmark to separate meaningful insight from random noise. If “pricing transparency” appears in nine out of twelve responses, that isn’t a fluke—it’s a requirement for relevance.

The Three-Pillar Framework for Pattern Analysis

To effectively use AI response patterns, you need a structured way to categorize what you are seeing. You can break these down into three distinct types of patterns: Structural, Conceptual, and Entity.

1. Structural Patterns

Structural patterns refer to how the AI chooses to organize the information it provides. LLMs are trained on massive amounts of high-quality content, and they often default to structures that humans find most helpful. By identifying these, you can align your own content formatting with what the AI perceives as the “ideal” way to answer a query. When analyzing structural patterns, look for:

- Section Frequency: Does the AI always start with a definition before moving to a list of tools?
- Formatting Consistency: Does it prefer bulleted lists, numbered steps, or comparison tables?
- Framing: Does the model typically use a “Pro/Con” approach or a “Decision Framework” style?

For example, if you notice that every time you ask an AI “how to implement a new CRM,” it follows a structure of Definition > Criteria > Tools > Implementation Steps, that is a strong signal. If your blog post on CRM implementation skips the “Criteria” section, you might be missing a piece of the puzzle that the AI deems essential for a complete answer.

2. Conceptual Patterns

Conceptual patterns are the themes and subtopics that the model associates with your primary keyword.
These are the “must-have” ideas that build authority in the eyes of an LLM. This is where you can identify what users care about most, as the AI’s training data reflects broad human intent.

Let’s use the example of “Best domain registrars.” If you run this prompt through multiple models, you might see the following concepts appearing repeatedly:

- Pricing


4 strategic paid search pivots to survive Google’s AI Overviews

The New Reality of Search: Navigating the AI Era

Google’s search architecture has undergone its most radical transformation since the introduction of mobile search. The integration of AI Overviews (formerly known as the Search Generative Experience) has shifted Google from a library of links to an “answering engine.” For search engine marketers, this transition isn’t just about organic rankings; it is fundamentally altering the mechanics of paid search.

AI Overviews now appear across a vast spectrum of search results, but their penetration varies significantly by industry and query length. Data from Adthena highlights a stark reality: in the Finance sector, AI Overviews are triggered on 79% of long-tail queries (five or more words). In Retail, the visibility is even higher, reaching 84% for comparison and product discovery queries in the 9-10 word range. Even in Healthcare, where Google has traditionally been cautious, AI Overviews are appearing for short, 1-3 word medical questions.

For the modern advertiser, the challenge is clear. While organic traffic faces significant headwinds, the downstream impact on paid search is often underestimated. AI Overviews are not just moving ads down the page; they are changing how users interact with the results, how much those clicks cost, and what happens after the click occurs. To survive and thrive in this environment, advertisers must move beyond traditional keyword management and embrace four specific strategic pivots.

AI Overviews’ impact on paid search

The introduction of AI Overviews is accelerating several structural trends that were already reshaping the digital advertising landscape. We are seeing a convergence of SERP (Search Engine Results Page) saturation, the expansion of broad match keywords, and the near-total adoption of automated bidding through systems like Performance Max. However, the speed of the AI rollout has compressed a transition that might normally have taken years into a matter of months.
To build a resilient strategy, we must first dissect exactly how these AI answers are impacting campaign performance. The impact is not uniform; it touches click-through rates, auction dynamics, and the very structure of the buyer’s journey.

AI Overviews drive lower response rates

The most immediate and visible impact of AI Overviews is the decline in click-through rates (CTR). When Google provides a comprehensive answer at the top of the page, the incentive for a user to click an ad—or any link—diminishes. Recent data from Seer Interactive paints a sobering picture of this decline. Between June 2024 and September 2025, paid CTR on queries featuring AI Overviews plummeted by 68%, falling from an average of 19.7% to a mere 6.34%. While organic CTR also fell by 61% on these same queries, the steeper decline in paid performance suggests that AI Overviews are doing more than just taking up space. They are reshaping user intent.

The most dramatic collapse occurred in July 2025, when paid CTR on certain queries fell from 11% to 3% in just thirty days. This coincided with Google’s aggressive expansion of AI answers into commercial and navigational queries. However, it is important to note a distinction in query type. Non-branded, informational queries—those where the user is looking for a definition, a guide, or a simple fact—have seen the most severe declines. Conversely, branded search and high-intent transactional queries have shown much more resilience. When a user is ready to buy, an AI summary is a hurdle they are willing to jump over to reach their destination.

AI Overviews contribute to higher CPCs through inventory compression

Economics dictates that when supply decreases and demand remains constant or increases, prices rise. AI Overviews are effectively compressing ad inventory. By pushing ads further down the page or integrating them directly into the AI interface, Google has reduced the number of “premium” slots available for traditional search ads.
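As a quick sanity check, the 68% headline figure from the Seer Interactive data cited above follows directly from the two reported CTR averages:

```python
def pct_decline(before: float, after: float) -> float:
    """Relative decline from `before` to `after`, as a percentage."""
    return (before - after) / before * 100

# Paid CTR on AI Overview queries, June 2024 vs. September 2025 (figures cited above)
print(round(pct_decline(19.7, 6.34)))  # 68
```

The same function applied to the July 2025 dip (11% to 3%) shows an even steeper single-month collapse of roughly 73%.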
In the first quarter of 2025, Google Search spending grew by 9% year-over-year, yet click growth only managed a 4% increase. This 5-point gap indicates that more advertisers are chasing a shrinking pool of clicks, driving up the Cost Per Click (CPC). AI Overviews amplify this inflation. Research into ad positioning shows that while ads appearing above an AI Overview still maintain reasonable performance, ads appearing below the AI block see a catastrophic drop in impression share and CTR.

Furthermore, Google’s automated bidding systems are designed to optimize for conversions rather than cost efficiency. As click inventory shrinks, these systems bid more aggressively to secure the remaining high-intent traffic, leading to premium CPCs. This environment favors larger advertisers with bigger budgets, as double-serving policies and auction dynamics tend to concentrate visibility among those who can afford the rising “entry fee” for the top spot.

AI Overviews collapse the consideration phase

The traditional marketing funnel—moving from awareness to consideration and finally to conversion—is being flattened. AI Overviews act as a research assistant, performing the comparison and synthesis tasks that used to take users multiple searches and several days to complete. This is known as “journey compression.”

Consider a search for “best project management software for remote teams.” In 2023, a user would likely click three different ads, read four organic blog posts, and visit several vendor sites over a two-week period before signing up for a trial. Today, an AI Overview can present a comparison table of features, pricing tiers, and pros/cons directly on the search page. A user can now move from “I need software” to “I am signing up for this specific software” in a single session.

This compression has three major side effects for advertisers:

1. Smaller Retargeting Pools: Because users are finding answers on the SERP without clicking through to websites, your remarketing audiences are shrinking. Even though Google has lowered Customer Match minimums to 100 users to help smaller businesses, a campaign that once built a 10,000-person audience from informational traffic might now only capture 3,000.

2. Reduced Brand Exposure: If a user makes a decision based on an AI-generated comparison, they haven’t experienced your website’s UX, your specific brand voice, or your full marketing message. They enter the purchase phase with a “sterile” understanding


ChatGPT ecommerce traffic converts 31% higher than non-branded organic search

The Changing Landscape of Digital Commerce

The ecommerce world is currently witnessing a significant shift in how consumers discover, research, and ultimately purchase products. For decades, Google’s non-branded organic search has been the primary engine driving new customer acquisition. However, as generative artificial intelligence matures, a new contender has emerged in the referral traffic landscape. Recent data from a comprehensive 12-month study indicates that ChatGPT is not just a tool for writing emails or generating code; it is becoming a highly effective funnel for ecommerce conversions.

According to a detailed analysis of GA4 data across 94 major ecommerce brands in 2025, traffic originating from ChatGPT converts at a rate 31% higher than traditional non-branded organic search. This finding marks a critical milestone in the evolution of Search Engine Optimization (SEO) and highlights a fundamental change in user behavior. While the total volume of traffic from AI platforms remains a fraction of what Google provides, the quality of that traffic is proving to be vastly superior in terms of purchase intent.

Deconstructing the 31% Conversion Advantage

The core of the study, conducted by Visibility Labs, focused on 9.46 million non-branded organic sessions compared to 135,000 ChatGPT referral sessions. The results were striking. ChatGPT traffic converted at an average rate of 1.81%, while non-branded organic search trailed behind at 1.39%. This 31% gap was not a one-time fluke; ChatGPT outperformed traditional organic search in 10 out of the 12 months analyzed throughout 2025.

To understand why this gap exists, we must look at the concept of “intent compression.” When a user searches for a product on a traditional search engine, they are often at the beginning of their journey. They may see a list of links, click through several tabs, and compare features across multiple sites.
This process is manual and often leads to a high “bounce” rate as users hop from one site to another looking for information. In contrast, a user interacting with ChatGPT is often engaging in a deep-dive dialogue. They might ask the AI to “find a waterproof hiking boot suitable for wide feet under $200 with good ankle support.” By the time ChatGPT provides a link to a specific product, the AI has already filtered out irrelevant options. The user has done their research within the chat interface. Consequently, when they finally click through to an ecommerce site, they are significantly closer to the “buy” button than a user who just searched for “best hiking boots.”

The Phenomenal Growth of AI Referral Traffic

While the conversion rate is high, the most visible trend in 2025 was the sheer velocity of traffic growth. At the start of the year in January, the 94 ecommerce sites in the study saw only 1,544 sessions attributed to ChatGPT. By December, that number had skyrocketed to 18,202 sessions. This represents a staggering 1,079% increase in traffic volume over a single calendar year. In comparison, non-branded organic search grew by a modest 17% during the same period.

While organic search still commands the lion’s share of total traffic, the growth trajectory of AI-driven referrals suggests that we are in the early stages of a major channel shift. In the first half of 2025, ChatGPT traffic was essentially a statistical outlier for many brands. By the second half of the year, it had become a measurable contributor to the bottom line, narrowing the gap with traditional search from a 70x difference to a 47x difference by the fourth quarter.

Revenue Per Session and the Average Order Value Paradox

The financial metrics behind the traffic reveal a nuanced story. Interestingly, the Average Order Value (AOV) for ChatGPT users was actually lower than that of organic search visitors.
The data showed an AOV of $204 for ChatGPT referrals versus $238 for organic search—a 14.3% difference. One might assume that lower AOV signifies “lower quality” customers, but the Revenue Per Session (RPS) tells the opposite story. Because ChatGPT traffic converts so much more efficiently, it generated $3.65 per session, compared to $3.30 for organic search. This 10.3% advantage in RPS proves that even if AI-driven shoppers are spending slightly less per transaction, they are far more likely to complete a transaction in the first place.

The lower AOV might be attributed to the specific nature of AI recommendations. LLMs (Large Language Models) often provide highly specific, utilitarian recommendations that fit a user’s exact criteria—including price points. If a user asks for a specific solution, they may buy exactly what they need without the “aisle wandering” or upsell exposure that occurs during a traditional, broader search session.

The Role of Product Updates: From Chat to Shopping

The surge in ChatGPT’s performance in 2025 wasn’t just a matter of user adoption; it was driven by specific technological updates from OpenAI. A significant spike in traffic was observed in April 2025, coinciding with the introduction of enhanced shopping carousel features within the ChatGPT interface. These carousels allowed the AI to present products visually, with prices and direct links, mirroring the functionality of Google Shopping but within a conversational context. This update transformed ChatGPT from a text-based advisory tool into a visual discovery engine.

However, the data also shows that this growth began to flatten around August 2025. This suggests a period of “market normalization” where the early adopters have integrated AI search into their habits, and the platform reached a temporary plateau in how it surfaces commercial links. For ecommerce managers, this highlights that AI traffic is highly sensitive to the UI/UX changes made by the AI providers themselves.
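The study’s headline metrics are internally consistent: revenue per session is simply conversion rate times average order value, and the growth and conversion gaps fall out of the raw figures. A quick check against the reported values (small rounding differences are expected, since the published averages are themselves rounded):

```python
# Reproduce the study's headline metrics from its raw figures.
cr_chatgpt, cr_organic = 0.0181, 0.0139   # reported conversion rates
aov_chatgpt, aov_organic = 204, 238       # reported average order values ($)

# Conversion-rate gap: roughly 30%, in line with the reported 31%.
uplift = cr_chatgpt / cr_organic - 1
print(f"Conversion uplift: {uplift:.0%}")

# Revenue per session = conversion rate x AOV.
rps_chatgpt = cr_chatgpt * aov_chatgpt   # ~ $3.69 vs. the reported $3.65
rps_organic = cr_organic * aov_organic   # ~ $3.31 vs. the reported $3.30
print(f"RPS: ChatGPT ${rps_chatgpt:.2f}, organic ${rps_organic:.2f}")

# Traffic growth: 1,544 sessions in January to 18,202 in December.
growth = 18202 / 1544 - 1
print(f"Session growth: {growth:.0%}")   # ~1079%, matching the reported figure
```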
Addressing the Attribution Gap: The “Dark Funnel” of AI

One of the most critical takeaways for digital marketers is that the current GA4 data likely underrepresents the true impact of ChatGPT on ecommerce revenue. The study points to a significant “attribution gap” that occurs during the customer journey. When a user discovers a brand through ChatGPT, they don’t always click the direct link provided in the chat. Many users utilize ChatGPT as a research assistant to narrow down their choices


Google expands AI Max text guidelines globally

The landscape of digital advertising is undergoing a fundamental transformation as artificial intelligence moves from a supportive role to the core of creative production. Google has announced a significant milestone in this evolution by expanding beta access to text guidelines globally for advertisers using AI Max. This update, which applies to both Search and Performance Max campaigns, represents a pivotal shift in how brands interact with generative AI, moving away from a “black box” approach toward a more collaborative, human-steered model.

As Performance Max and AI-driven Search campaigns become the industry standard, the primary concern for marketers has shifted from “How do I use AI?” to “How do I control AI?” The global rollout of these text guidelines provides an answer, offering a sophisticated layer of governance that ensures AI-generated ad copy remains strictly aligned with brand identity, tone, and legal compliance.

The Shift Toward Brand-Centric AI Automation

For years, digital marketers have relied on Google’s machine learning algorithms to optimize bidding and targeting. However, the creative aspect—the actual words that appear on a user’s screen—has traditionally been the domain of human copywriters. With the introduction of generative AI tools within Google Ads, the speed of creative production increased exponentially, but it brought a new challenge: brand drift. AI-generated creative can occasionally lean into generic tropes, use language that is too informal, or emphasize value propositions that conflict with a brand’s premium positioning.

Google’s expanded text guidelines aim to solve this by allowing advertisers to provide natural-language instructions. This means that instead of just letting the AI “guess” the best copy based on a landing page, marketers can now set proactive guardrails.
The global expansion of this beta feature means that advertisers across all industries and in all supported languages can now define exactly how they want their brand to be represented. This isn’t just about catching errors; it is about scaling brand-perfect messaging across thousands of different ad permutations simultaneously.

How AI Max Text Guidelines Work

The core functionality of these text guidelines lies in their simplicity and accessibility. Unlike previous iterations of ad technology that required complex settings or Boolean logic, these guidelines use natural language processing (NLP). Advertisers can essentially “talk” to the Google AI to shape the creative output. When setting up or refining an AI Max campaign, advertisers can enter specific instructions such as “avoid using the word ‘cheap’” or “ensure all headlines emphasize sustainability.” The AI then processes these instructions and applies them as a filter or a creative template for all generated assets.

Key Features of the Text Guidelines

The global expansion includes several critical updates designed to give advertisers more granular control:

Natural Language Instructions: You can use conversational prompts to steer the AI’s creative direction.

Exclusionary Guidelines: Marketers can explicitly list terms, phrases, or concepts that the AI should never use. This is vital for industries with strict regulatory requirements or brands that have specific competitive sensitivities.

Tone and Voice Alignment: Guidelines can specify whether a brand should sound professional, witty, urgent, or empathetic, ensuring that the AI doesn’t default to a generic “marketing” voice.

Language and Vertical Support: The feature now supports a full range of languages and business verticals, making it a viable tool for international conglomerates and local businesses alike.
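Google has not published the internal mechanics of these guidelines, so the sketch below is purely conceptual: it shows how an exclusionary guideline could, in principle, act as a post-generation filter over candidate headlines. The function name, the banned-term list, and the candidate headlines are all hypothetical; this is not Google’s implementation or API.

```python
# Conceptual illustration only: enforcing an exclusionary guideline such as
# "never use language implying low cost" as a filter on candidate headlines.
# This is NOT Google's implementation or API.
BANNED_TERMS = {"cheap", "lowest price", "discount", "bargain"}  # hypothetical "never" list

def passes_guidelines(headline: str) -> bool:
    """Reject any candidate headline containing a banned term."""
    text = headline.lower()
    return not any(term in text for term in BANNED_TERMS)

candidates = [
    "Swiss Craftsmanship Since 1884",
    "Lowest Price Guaranteed on Luxury Watches",
    "Heritage Timepieces, Built to Last",
]
approved = [h for h in candidates if passes_guidelines(h)]
print(approved)  # the "Lowest Price" headline is filtered out
```

In practice the real system interprets natural-language instructions rather than literal string lists, which is precisely what makes the feature accessible to non-technical marketers; the sketch only captures the filtering effect.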
The Strategic Importance of Brand Safety and Control

In the current digital ecosystem, brand safety is about more than just where an ad appears; it is also about what the ad says. A single off-brand headline can erode years of brand building. This is why the global expansion of text guidelines is being met with enthusiasm from performance marketing agencies and in-house brand teams.

As AI-powered creative becomes central to performance marketing, the risk of “creative drift” increases. Without guidelines, AI models might optimize for clicks at the expense of brand integrity. For example, a luxury watch brand might see high click-through rates (CTR) by using language like “Lowest Price Guaranteed,” but such language would be devastating to its premium market positioning. By using the new guidelines, that same luxury brand can set a rule: “Never use language implying low cost or discounts; focus on craftsmanship and heritage.” This ensures that the campaign achieves its performance goals without sacrificing the brand’s core identity.

Case Study: BYD and the Impact of Human-Guided AI

The effectiveness of these guidelines is already being documented by early adopters. The automotive manufacturer BYD (Build Your Dreams) participated in the initial testing phases and saw remarkable results. By combining the speed of Google’s AI creative tools with specific human-guided safeguards, BYD was able to maintain a consistent brand voice while scaling their ad efforts across multiple regions.

The results were twofold: BYD saw a higher volume of qualified leads and a significant reduction in cost per lead (CPL). This case study highlights a crucial point: AI performs best when it has a clear objective and a well-defined set of boundaries. When the AI doesn’t have to “guess” what is off-limits, it can focus its computational power on finding the most effective ways to communicate within the approved brand framework.
Best Practices for Implementing Text Guidelines

To make the most of Google’s expanded text guidelines, advertisers should approach their instructions with clarity and strategic intent. Here are several best practices for setting up effective AI guardrails:

1. Define Your “Never” List

Start by identifying words or phrases that are strictly prohibited. These might include competitor names, slang that doesn’t fit the brand, or terms that are legally sensitive in your industry (such as specific medical or financial claims).

2. Specify Brand Pillars

Instead of just telling the AI what not to do, tell it what to prioritize. If your brand is built on “innovation” and “reliability,” include those as core instructions. This helps the generative AI weight those concepts more heavily when drafting copy.

3. Use Natural Language, Not Code

Google’s AI is designed to understand context. You don’t need to use rigid commands. Instead, use descriptive


Google’s spam update vs. AI affiliate sites: An SEO experiment

The Changing Landscape of Search and Affiliate Marketing

For over a decade, the playbook for affiliate marketing felt relatively straightforward. If you could identify a niche with decent search volume, secure a partial-match domain (PMD), and structure your headings to mirror commercial-intent queries, you were halfway to a profitable venture. In those days, ranking for high-value terms like “best cordless drills” or “top-rated insurance providers” was less about the depth of your soul and more about the precision of your keyword placement.

When these technical basics were paired with conversion-optimized widgets and a steady stream of “freshness” updates—minor tweaks to dates and introductory paragraphs—the results were often staggering. It was not uncommon for savvy SEOs to earn tens of thousands of dollars in monthly affiliate revenue with minimal ongoing maintenance. The system rewarded those who could play the algorithm like a finely tuned instrument.

However, as AI tools have commoditized content production, Google has been forced to evolve its detection systems at an unprecedented pace. Today, the barrier to entry for content creation has effectively vanished. This has led to a fundamental question: can old-school, high-scale, low-trust tactics still survive under Google’s aggressive spam updates and in the “Helpful Content” era? To find out, a rigorous experiment was conducted to pressure-test the resilience of AI-driven affiliate sites against Google’s increasingly sophisticated defenses.

Pressure-testing Google’s spam update

Before diving into the experiment, it is important to establish a baseline. In the world of SEO, we often talk about “white hat” versus “black hat,” but the reality is usually a spectrum of “gray.” To understand the true impact of Google’s updates, one must first look at what the search engine considers the gold standard of publishing.
For several months prior to the experiment, I worked on scaling an affiliate initiative for a legacy website in a highly sensitive “Your Money or Your Life” (YMYL) category. This project followed every best practice in the book. We didn’t just generate text; we hired subject matter experts (SMEs) to craft educational, deeply researched content that truly informed the reader. While the ultimate goal was to capture commercial intent, the site’s existence wasn’t predicated solely on affiliate links.

The site featured thousands of pages of organic user-generated content (UGC) that had been accumulated over years. This UGC served as a foundation of trust, providing original research and real-world expert insights. The technical architecture was sound, the brand trust was established, and the backlinks were earned, not bought. It was a perfect ecosystem: legacy authority combined with a helpful commercial layer that served a pre-existing demand. This was the control group—the “trust-heavy” model.

The experiment: Scaling AI without trust

If the first model was built on the bedrock of trust and earned authority, the experiment was designed to do the exact opposite. The goal was to remove every signal of human expertise and brand reputation to see if the technical scaffolding of SEO alone could still support a profitable site. During this period, the SEO landscape on platforms like LinkedIn and X was flooded with “AI influencers” claiming that one could generate thousands of pages by simply scraping public data and rewriting it with Large Language Models (LLMs).

To test this, I purchased three domains that were intentionally chosen to match high-value vocational queries: “best welding schools,” “best plumbing schools,” and “best electrical schools.” These are classic “bottom-funnel” niches where a single lead can be worth a significant amount to the educational institutions involved. The experiment was designed to be low-trust and high-scale.
Using a combination of modern tools, the build process looked like this:

The Technical Stack and Methodology

The websites were built using a modern, aesthetic design to mimic the “look” of a professional directory. I utilized a “vibe-coded” Python API call to fetch public data regarding trade school locations, certification programs, and tuition costs. This data was then fed into ChatGPT, which was used to template the subheadings and paragraph text for thousands of pages. Within just a few hours, I had deployed a massive directory of school pages across all three domains.

The strategy utilized “liquid content,” allowing me to programmatically inject public data into templated frameworks. I targeted superlatives by program type and by state (e.g., “Best Welding Schools in Ohio”) and created individual pages for every school in the database. To maximize the internal “link juice,” I implemented aggressive internal linking structures that focused on ensuring every page was crawled, often at the expense of a logical user journey.

The setup was a textbook violation of Google’s long-term trust signals. The sites shared several specific traits that made them prime targets for a spam update:

Zero Brand Signals: No social media presence, no “About Us” page with real humans, and no history of brand searches.

Programmatic AI Content: The text was clean but lacked any “experience” or unique perspective.

Public Data Aggregation: There was no original data; it was simply a reorganized version of information already available on government and education websites.

Aggressive Linking: A spiderweb of internal links designed to manipulate crawl depth.

No Authorship: No recognized experts or subject matter authorities were attached to the content.

Confirmed: The data shows Google’s spam updates work

In the beginning, the experiment seemed to suggest that Google was still vulnerable to these tactics. The indexation was remarkably fast.
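The article does not publish its actual scripts, so the following is a hedged reconstruction of the “liquid content” approach it describes: public-data records injected into a page template at scale. The record fields, the sample data, and the template are all invented for illustration.

```python
# Hypothetical sketch of "liquid content": injecting public data records
# into a templated page framework at scale. All data here is invented.
from string import Template

PAGE_TEMPLATE = Template(
    "Best $trade Schools in $state\n"
    "$school offers a $program program with tuition around $$${tuition}.\n"
)

# Stand-in for records fetched via a public-data API call
records = [
    {"trade": "Welding", "state": "Ohio", "school": "Example Trade Institute",
     "program": "certificate", "tuition": 8500},
]

# One templated page per record; in the experiment this ran over thousands
pages = [PAGE_TEMPLATE.substitute(r) for r in records]
print(pages[0])
```

The point of the sketch is the economics the article describes: once the template exists, the marginal cost of each additional page is effectively zero, which is exactly the trait that makes such sites a target for spam classifiers.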
Within days, the AI-generated pages began surfacing for long-tail queries. Because the niches were specific (local trade schools), the competition was low enough that even “thin” content could find its way to the first page. Within the first two months, the sites were performing surprisingly well. Each of the three websites was generating roughly 200 in-market clicks per month. For a project that took only a few hours to set up and cost less than $100 in domains and API fees, the ROI looked promising. But this success was short-lived.

The December Downturn

The turning point arrived with the first major December spam update following the sites’ launch.


What industry data reveals about the impact of Google’s AI Overviews on paid search by Adthena

The landscape of search engine marketing has undergone a seismic shift. What began as an experimental project under the Search Generative Experience (SGE) umbrella has now matured into a permanent, influential fixture of the search engine results page (SERP): Google’s AI Overviews (AIOs). For digital marketers and paid search specialists, the transition from standard blue links to generative AI summaries represents more than just a visual update; it is a fundamental restructuring of how users interact with information and how brands capture attention.

To quantify this impact, Adthena conducted a comprehensive analysis of search data from late December 2025 through January 2026. This study tracked the performance metrics of hundreds of thousands of advertisers, encompassing more than 5 million individual ads across six major industries. The findings challenge the notion that AI Overviews are a uniform presence. Instead, the data reveals a fragmented reality where the impact on Pay-Per-Click (PPC) revenue is dictated by industry-specific intent, device type, and the nature of the content the AI generates.

The Financial Mechanics: How AI Overviews Reshape PPC Math

For years, the formula for a successful paid search campaign was relatively straightforward: bid on the right keywords, optimize your ad copy for high click-through rates (CTR), and ensure your landing page converts. However, the introduction of AI Overviews has introduced a new variable that can disrupt this entire chain of events. When a generative summary appears at the top of the SERP, it often pushes traditional paid ads “below the fold,” particularly on mobile devices. This displacement triggers a negative chain reaction for advertisers. First, reduced visibility leads to lower CTRs.
When users find their answers within the AI summary itself, the motivation to click on an external link—even a highly relevant ad—diminishes. Second, fewer clicks naturally lead to a decline in total conversion volume. Even if your conversion rate remains steady, the shrinking traffic pipeline reduces the absolute number of leads or sales generated.

Perhaps most concerning is the impact on Cost Per Click (CPC). In sectors where AI Overviews trigger on high-competition terms, the remaining ad slots become more valuable. Advertisers often find themselves paying a “visibility tax,” where CPCs rise as they fight for the limited space left above or immediately following the AI summary. For many brands, this squeeze on both ends—lower traffic and higher costs—results in a significantly reduced Return on Ad Spend (ROAS).

Industry Deep-Dive: A Fragmented Landscape

The Adthena data highlights that Google’s AI does not treat all industries equally. The frequency and style of AI Overviews vary significantly depending on the sector, creating unique challenges and opportunities for different types of businesses.

Technology and Telecom: The Battle for the Mid-Funnel

The Technology and Telecommunications sectors are currently the most affected by the rise of AI Overviews. In these industries, Google frequently uses AI to generate comparison-style content. When a user searches for “best cloud storage” or “top 5G data plans,” the AI Overview often provides a side-by-side analysis of features and pricing. This directly threatens the mid-funnel research phase where advertisers usually capture high-intent traffic. Because the AI satisfies the user’s need for comparison, the user may never feel the need to visit an advertiser’s website to do their own research. Consequently, these industries show consistently lower CTRs when an AI Overview is present compared to standard search results.
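To see how the “squeeze on both ends” compounds, consider a toy model with entirely hypothetical figures (none of these numbers come from the Adthena study): a fixed impression pool, a CTR drop from displacement, and a CPC rise from inventory scarcity together shrink both revenue volume and ROAS.

```python
# Toy model (all figures hypothetical, not from the Adthena study) of the
# double squeeze: lower CTR shrinks the traffic pipeline while higher CPC
# raises the cost of the clicks that remain.
def campaign(impressions, ctr, cpc, cvr, revenue_per_conversion):
    clicks = impressions * ctr
    spend = clicks * cpc
    revenue = clicks * cvr * revenue_per_conversion
    return revenue, spend, revenue / spend  # revenue, cost, ROAS

# Before: the ad sits above the fold.
rev_a, spend_a, roas_a = campaign(100_000, ctr=0.05, cpc=2.00, cvr=0.03,
                                  revenue_per_conversion=150)
# After: an AI Overview pushes the ad down (CTR collapses) and CPC rises 25%.
rev_b, spend_b, roas_b = campaign(100_000, ctr=0.02, cpc=2.50, cvr=0.03,
                                  revenue_per_conversion=150)

print(f"Revenue: {rev_a:.0f} -> {rev_b:.0f}")  # total revenue falls 60%
print(f"ROAS: {roas_a:.2f} -> {roas_b:.2f}")   # ROAS falls from 2.25 to 1.80
```

Note that in this model the CTR drop alone cuts volume while leaving ROAS untouched; it is the CPC rise that erodes efficiency. Advertisers feel both at once, which is why the combined effect reads as a “visibility tax.”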
Healthcare and Financial Services: The Informational Filter

The impact on Healthcare and Financial Services looks quite different. In Healthcare, Adthena found that 74% of AI Overviews are dominated by “News” or general medical information. In Financial Services, 54% of the content is structured as FAQs. These summaries often act as “intent filters.” For these sectors, AI Overviews may actually provide a hidden benefit by satisfying low-intent or “top-of-funnel” queries. If a user is looking for a general definition of a medical term or a basic explanation of an interest rate, the AI answers it. This prevents the advertiser from wasting ad spend on a user who was never likely to convert in the first place. However, for high-value keywords in Finance, even modest CPC increases can be devastating to profitability due to the already high baseline costs in that industry.

Retail and Automotive: Consumer Resilience

Retail and Automotive show a surprising level of resilience. Despite the presence of AI summaries, users in these categories seem more inclined to bypass the AI to find verified brand information or specific product listings. In Retail, the gap between CTR with and without AI Overviews is narrower than in Technology. This suggests that while Google can summarize features, it cannot replace the brand trust or the specific shopping experience consumers seek when they are ready to make a purchase.

Content Themes: Where AI Competes and Where It Retreats

Understanding the “theme” of the AI content is critical for PPC strategy. Adthena’s research categorized the types of information being surfaced by Google’s generative engine, revealing clear patterns in where the AI is most active.

Comparison and Instructional Conflict

Comparison content is the primary battleground. This is where Google’s AI is most aggressive, attempting to synthesize data from multiple sources to provide a definitive “best of” list.
For advertisers who rely on comparison-based traffic, the AI Overview is a direct competitor for the click.

The Problem-Solving “Safe Harbor”

One of the most interesting findings in the Adthena report is the “opportunity gap” in problem-solving and troubleshooting content. Currently, problem-solving themes account for only 0-2% of AI Overviews across the six industries studied. This suggests that for complex “how-to” queries or specific technical troubleshooting, Google’s AI is not yet confident enough to provide a primary summary. For now, this remains a safe harbor for advertisers to capture traffic without significant AI interference.

The Device Disparity: Desktop Saturation vs. Mobile Displacement

The impact of AI Overviews is also heavily dependent on the device the consumer is using. Adthena’s data reveals a striking divergence between desktop and mobile


Information Retrieval Part 3: Vectorization And Transformers (Not The Film)

The Evolution of Search: From Keywords to Context

The landscape of information retrieval (IR) has undergone a tectonic shift over the last decade. In the early days of the internet, search engines functioned much like a digital library card catalog. They relied on exact string matching, looking for the specific sequence of letters you typed into a search bar. If you searched for “running shoes,” the engine looked for documents containing those exact words. If a high-quality page used the term “jogging footwear” instead, you might never find it. This was the era of sparse retrieval, dominated by simple frequency counts and keyword density.

Today, we find ourselves in the midst of a semantic revolution. Modern information retrieval is no longer about matching characters; it is about understanding concepts, intent, and relationships. This transition has been fueled by two major breakthroughs: Vectorization and Transformers. These technologies allow machines to “read” and “understand” text in a way that mimics human cognition, albeit through the lens of complex mathematics. For SEO professionals, digital marketers, and tech enthusiasts, understanding these concepts is no longer optional—it is the key to navigating the future of AI-driven search.

What is Vectorization in Information Retrieval?

To understand how a computer processes language, we must first accept that computers are inherently bad at understanding words but exceptionally good at processing numbers. Vectorization is the process of converting text—whether it is a single word, a sentence, or an entire document—into a numerical format that a machine can manipulate. These numerical representations are called “vectors.” In simple terms, a vector is a list of numbers that represents a point in a multi-dimensional space. In the context of NLP (Natural Language Processing), these dimensions represent different features or “meanings” of the text.
While a human might describe a “cat” as a small, furry, carnivorous mammal, a vectorization model might represent “cat” as a series of coordinates like [0.12, -0.56, 0.89, …].

The Move from One-Hot Encoding to Embeddings

Early attempts at vectorization used a method called “one-hot encoding.” In this system, every unique word in a vocabulary was given its own dimension. If your vocabulary had 10,000 words, each word was a vector of 10,000 numbers, where all were zero except for the one position assigned to that specific word. This was highly inefficient and, more importantly, it failed to capture any relationship between words. To a one-hot encoder, the word “dog” was just as different from “puppy” as it was from “refrigerator.”

Modern information retrieval utilizes “word embeddings.” Unlike one-hot encoding, embeddings are “dense” vectors. They use a fixed number of dimensions (often 300, 768, or more) to represent words. Words that share similar meanings or appear in similar contexts are placed closer together in this multi-dimensional vector space. This allows a search engine to mathematically determine that “king” and “queen” are related, or that “walking” and “ran” are different forms of the same concept.

The Geometry of Meaning: Vector Space Models

When we represent documents and queries as vectors, we create what is known as a Vector Space Model (VSM). In this model, the “relevance” of a document to a search query is determined by its physical proximity in the vector space. This is a fundamental departure from traditional Boolean search. In a VSM, search is essentially a geometry problem. When a user enters a query, the search engine converts that query into a vector. It then looks for document vectors that are located near the query vector. This approach allows search engines to identify relevant content even if the document doesn’t contain the exact words used in the query. This is the foundation of semantic search.
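A minimal sketch of this geometry, using made-up 4-dimensional embeddings (real models use hundreds of dimensions, learned from data) and cosine similarity as the proximity measure:

```python
import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy dense embeddings, invented for illustration; real embeddings are learned.
embeddings = {
    "dog":          np.array([0.9, 0.8, 0.1, 0.0]),
    "puppy":        np.array([0.8, 0.9, 0.2, 0.0]),
    "refrigerator": np.array([0.0, 0.1, 0.9, 0.8]),
}

query = embeddings["dog"]
for word, vec in embeddings.items():
    print(f"{word}: {cosine_similarity(query, vec):.3f}")
# "puppy" scores far closer to "dog" than "refrigerator" does
```

This is exactly the property one-hot encoding lacked: in the dense space, “dog” and “puppy” sit near each other, while “refrigerator” points in a different direction.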
Understanding Cosine Similarity

How do we actually measure the “closeness” of two vectors? While there are several methods, the most common in information retrieval is Cosine Similarity. Instead of measuring the straight-line distance (Euclidean distance) between two points, Cosine Similarity measures the cosine of the angle between two vectors.

Why use the angle? In text analysis, the length of a document can skew Euclidean distance. A very long article about “SEO” and a short tweet about “SEO” might be far apart in space simply because the long article has more words (larger magnitude). However, the direction of their vectors—representing their topic—will be very similar.

Cosine Similarity produces a score between -1 and 1:

1: The vectors are identical in direction (highly relevant).
0: The vectors are orthogonal (no relationship).
-1: The vectors are diametrically opposed (opposite meanings).

For SEOs, this means that the “topical authority” of a page is mathematically calculated based on how closely its content vector aligns with the intent vector of a user’s search query.

Transformers: The Engine of Modern NLP

While vectorization provided the “map” for search engines, Transformers provided the “intelligence” to read it. Introduced by Google researchers in the 2017 paper “Attention Is All You Need,” the Transformer architecture revolutionized how machines process sequences of data, particularly text. Before Transformers, models like RNNs (Recurrent Neural Networks) and LSTMs (Long Short-Term Memory networks) processed text word-by-word, from left to right. This was slow and often resulted in the model “forgetting” the beginning of a sentence by the time it reached the end. Transformers changed this by using a mechanism called “Self-Attention.”

The Power of Self-Attention

Self-attention allows a model to look at every word in a sentence simultaneously and determine which other words are most important for understanding its meaning.
It essentially weights the relationships between words regardless of their distance from each other. Consider the sentence: “The bank was closed because of the river flooding.” When a Transformer processes the word “bank,” the attention mechanism links it heavily to “river” and “flooding,” allowing the model to understand that we are talking about a geographical feature, not a financial institution. In a different sentence—”The bank was closed because it was Sunday”—the model would link “bank” to “Sunday,” correctly identifying it as a business. This contextual awareness is what makes modern search feel so much more intuitive.

BERT
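The weighting idea can be made concrete with a toy scaled dot-product attention computation. The vectors below are invented for illustration; in a real Transformer, the query and key vectors come from learned projections of token embeddings.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

# Invented 3-d key vectors for context tokens (not real learned weights).
tokens = ["bank", "river", "flooding", "sunday"]
keys = np.array([
    [1.0, 0.2, 0.0],    # bank
    [0.9, 0.1, 0.0],    # river: points in a similar direction to "bank"
    [0.8, 0.3, 0.0],    # flooding
    [-1.0, 0.0, 1.0],   # sunday: an unrelated direction
])
query = np.array([1.0, 0.2, 0.0])  # attention query for the word "bank"

# Scaled dot-product attention, per "Attention Is All You Need" (2017)
weights = softmax(keys @ query / np.sqrt(keys.shape[1]))
for token, w in zip(tokens, weights):
    print(f"{token}: {w:.2f}")
# "river" and "flooding" receive noticeably more weight than "sunday"
```

In the “river flooding” sentence, the context tokens pull attention toward the geographic sense of “bank”; swap in a “Sunday” context and the weights would shift accordingly.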
