

When Platforms Say ‘Don’t Optimize,’ Smart Teams Run Experiments via @sejournal, @DuaneForrester

The Unspoken Mandate: Why Digital Publishers Must Experiment Even When Algorithms Tell Them Not To In the complex, ever-shifting world of digital publishing and search engine optimization (SEO), a constant tension exists between the directives issued by major platforms and the competitive necessity of maximizing content visibility. Search engines, social media giants, and now, large language model (LLM) platforms often issue a stern warning: “Just create great content; don’t try to optimize for the algorithm.” While this advice sounds noble and user-centric on the surface, smart digital teams know that true survival and growth require a deep, data-driven understanding of how algorithms select, process, and ultimately present information. The rise of generative AI and powerful LLMs has made this understanding not just helpful, but absolutely critical. When platforms assure us the system is too complex to optimize, skilled practitioners, guided by research into AI mechanics, choose instead to run rigorous experiments. This strategic approach is highly relevant today, particularly following recent research exploring the specific mechanisms LLMs use to select and prioritize content. Digital strategist and thought leader Duane Forrester has synthesized these findings into a practical, actionable framework, providing publishers and SEO professionals with a roadmap to validate LLM preference signals in real-world scenarios. The Algorithmic Shift: From Keywords to Conversational AI For decades, optimization primarily revolved around predicting the ranking signals of traditional search engines—focusing on links, keyword density, technical site health, and topical relevance. While these elements remain crucial, the integration of advanced machine learning models, and specifically Large Language Models, has fundamentally changed how content is consumed by the system. Today, LLMs are not just ranking pages; they are interpreting, summarizing, synthesizing, and generating completely new responses based on a vast corpus of training data and real-time indexed content. This shift introduces entirely new optimization challenges and opportunities that traditional SEO guidelines often overlook or fail to address. When a platform provides a generative answer—whether it’s a Search Generative Experience (SGE) summary or a conversational chatbot response—it is performing an intensive content selection process. This process often bypasses the standard “ten blue links” structure, forcing publishers to compete for visibility within a synthesized, abstracted answer. Understanding the input preferences of the underlying LLM becomes the competitive differentiator. The Paradox of Platform Optimization Directives Why do major platforms—whether Google, Meta, or an emerging AI provider—so frequently advise against explicit optimization? There are several compelling reasons rooted in maintaining system health and user experience: Maintaining Integrity and Preventing Manipulation The primary goal of any platform is to deliver high-quality, relevant results to its users. Optimization, when executed poorly or maliciously, transforms into spam, low-quality content, or manipulative tactics designed only to trick the algorithm. Platforms want to discourage “black hat” methods that pollute the index and degrade the user experience. By issuing generic warnings, they encourage creators to focus on inherent quality. 
The Complexity Defense As algorithms have matured, they have become incredibly complex, incorporating hundreds or thousands of nuanced signals. For practical purposes, it is often easier for platforms to state that the system is unoptimizable than to maintain comprehensive documentation on every subtle signal and weighting factor. This opacity also protects the intellectual property embedded within the proprietary ranking models. The Market Survival Mandate For digital publishers and marketers, however, relying solely on the hope that “great content” will be discovered is a recipe for competitive failure. While quality is foundational, placement and visibility drive revenue. Savvy teams recognize that every algorithm, no matter how complex, operates on predictable mathematical principles that generate measurable preferences. If a team can scientifically test which content structures, semantic patterns, or data formats are preferentially selected by an LLM, they gain a legitimate and critical market advantage. This is not manipulation; it is advanced digital physics. New Research: Decoding LLM Content Selection The impetus for this new wave of experimentation stems from academic and industry research scrutinizing how LLMs prioritize different inputs when synthesizing information. These studies reveal several key areas where LLMs exhibit measurable, even exploitable, preferences: Semantic Density and Clarity Unlike early search algorithms that valued keyword quantity, LLMs appear to prioritize content that is semantically dense, highly focused, and unambiguous. An LLM works most efficiently when it can quickly identify key entities, relationships, and verifiable facts within a text block. Content that is verbose, vague, or riddled with filler language is harder for the model to process quickly and is therefore less likely to be chosen as the source for a summarized answer. Structural and Positional Bias Certain research suggests that LLMs, during training and real-time processing, may exhibit positional or structural biases similar to those observed in traditional search. For instance, specific structural elements (e.g., bulleted lists, well-formatted tables, dedicated summary blocks) might be preferentially weighted because they resemble the optimal formats the model was trained on to extract facts. If a key fact is buried halfway down a 3,000-word essay, an LLM might struggle to extract it efficiently compared to the same fact presented clearly in a dedicated “Key Takeaways” section. The Preference for Verifiability LLMs thrive on factual accuracy and verification. Content that explicitly cites sources, uses structured data (like Schema Markup), and demonstrates clear authority (E-E-A-T signals) is more likely to be deemed trustworthy by the model. When synthesizing an answer, an LLM prioritizes content that reduces its own risk of generating a “hallucination” or an incorrect response. Duane Forrester’s Framework: Turning Research into Action Understanding these theoretical LLM preferences is only the first step. The crucial move is to translate theory into a practical, repeatable process for validation. Duane Forrester, recognized for his deep expertise in search strategy and algorithmic transparency, emphasizes the need for teams to establish a controlled framework for running real-world experiments. His approach is built on the philosophy that platform warnings are not legal prohibitions, but signals that require a sophisticated testing mindset. 
If an LLM is a black box, the only way to understand its internal mechanisms is through careful observation of its outputs when inputs
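A minimal sketch of what such a controlled experiment might look like in practice, assuming access to an LLM API (here the OpenAI Python SDK and a "gpt-4o-mini" model name, neither of which is specified by the research or by Forrester's framework): present the same fact in two formats, ask the model repeatedly which passage it relied on, and tally the outcome.

```python
"""
A/B sketch: does an LLM preferentially cite a structured variant of the
same content over a prose-only variant? Illustrative only; swap in
whichever LLM client and model your team actually uses.
"""
from collections import Counter
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

QUESTION = "How long is the product warranty?"

VARIANT_A = (  # prose-only: the fact is buried mid-paragraph
    "Our flagship widget has been refined over a decade of customer feedback, "
    "and among its many benefits customers appreciate that every unit ships "
    "with a three-year warranty covering parts and labor."
)
VARIANT_B = (  # structured: the same fact sits in a dedicated summary block
    "Key takeaways:\n"
    "- Warranty: 3 years, parts and labor\n"
    "- Refined over 10 years of customer feedback"
)


def cited_source(question: str, trials: int = 10) -> Counter:
    """Ask the model which passage it relied on and tally the answers."""
    tally = Counter()
    for _ in range(trials):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=1.0,  # keep some randomness so trials can differ
            messages=[{
                "role": "user",
                "content": (
                    f"Passage A:\n{VARIANT_A}\n\nPassage B:\n{VARIANT_B}\n\n"
                    f"Question: {question}\n"
                    "Answer the question, then on the last line write only "
                    "'SOURCE: A' or 'SOURCE: B' for the passage you relied on."
                ),
            }],
        )
        text = resp.choices[0].message.content.strip()
        # crude parse: look at the final character of the SOURCE line
        tally["B" if text.rstrip().endswith("B") else "A"] += 1
    return tally


if __name__ == "__main__":
    print(cited_source(QUESTION))  # e.g. Counter({'B': 8, 'A': 2})
```

Repeated across enough questions and content pairs, this kind of tally is the raw material for validating or rejecting the structural-bias hypothesis described above, rather than taking platform guidance on faith.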


Is your account ready for Google AI Max? A pre-test checklist

The New Frontier of Search: Understanding Google AI Max

Google AI Max represents one of the most significant shifts in paid search advertising since the introduction of Performance Max (PMax). It is Google’s latest evolution toward a system that relies less on manually selected keywords and more on sophisticated machine learning and user signals to find valuable conversion opportunities.

AI Max is fundamentally Google’s foray into semi-keywordless targeting within the search environment. While advertisers must still provide “seed” keywords to give the system a starting point, AI Max goes far beyond standard matching logic. It leverages an expanded array of signals—including user intent, past browsing behavior, location, and the contextual relevance of the landing page—to determine when and how to display an ad to a searcher.

The promise of AI Max is conversion expansion. For accounts that are already highly optimized and maximizing performance on their core keywords, AI Max offers a pathway to tap into previously undiscovered customer segments. However, this power comes with considerable risk. If an account lacks proper optimization, data integrity, or a proven history of using Google’s automated tools effectively, enabling AI Max can quickly become a significant financial drain.

Before committing budget to this powerful new tool, a rigorous pre-test audit is essential. This checklist details the critical foundational requirements and strategic decisions necessary to ensure your account is truly ready for the complexities and potential rewards of AI Max.

AI Max vs. AI Overviews: Clarifying a Key Misconception

A common rumor circulating in the digital advertising community suggests that using AI Max is mandatory for ads to appear within Google’s new AI Overviews (formerly known as Search Generative Experience or SGE). This is inaccurate. Advertisers do *not* need to enable AI Max merely to show up in the AI Overview spaces. Standard broad match keywords, used within conventional Search campaigns, are capable of triggering ads in these generative results. AI Max should be viewed strictly as a conversion expansion tool designed to find high-intent audiences beyond your existing keyword coverage, not solely as a gatekeeper for AI-driven ad placements.

Establishing the Foundation: Core Requirements Before Enabling AI Max

Implementing AI Max successfully depends entirely on the stability and accuracy of the data infrastructure within your Google Ads account. Machine learning models, no matter how advanced, rely on accurate feedback loops.

Pristine Conversion Tracking and Attribution

The single most critical requirement before testing AI Max is ensuring flawless conversion tracking. AI Max is an optimization engine; it optimizes precisely toward what you define as success. If your conversion data is flawed, the AI will learn the wrong lessons and make poor investment decisions. Your tracking setup must be:

* **Accurate:** Ensure all valuable business outcomes (purchases, leads, calls) are being correctly recorded.
* **Deduplicated:** If you are using Google Ads, Google Analytics, or third-party CRM data, ensure there is no double-counting of conversions. Inflated conversion numbers lead the AI to believe performance is better than it actually is, causing overspending.
* **Focused on Business Outcomes:** Conversion actions must be weighted based on their true value (e.g., using conversion values for e-commerce or differing values for high-intent versus low-intent leads).
AI Max will prioritize actions with higher defined values. If you are not tracking conversion value, or if you are tracking low-value interactions (like simple page views) as primary conversions, the system will allocate budget inefficiently. If your data is unreliable, AI Max will be working from inaccurate historical performance, guaranteeing poor results and high CPAs.

Mandate for Automated, Conversion-Focused Bidding

AI Max requires the sophistication of automated bidding strategies to function effectively. Because it expands targeting significantly beyond your manually selected keywords, only automated bidding can process the massive influx of real-time signals and set appropriate bids for each unique auction. The compatible conversion-focused strategies include:

* **Maximize Conversions:** Aims to get the most conversions within a given budget.
* **Maximize Conversion Value:** Aims to maximize the total return (revenue) within a given budget.
* **Target CPA (tCPA):** Aims to achieve a specific cost-per-acquisition goal.
* **Target ROAS (tROAS):** Aims to achieve a specific return on ad spend goal.

Target Strategies Offer Greater Predictability

Based on extensive testing, AI Max operates with far greater predictability when paired with *Target* strategies (tCPA or tROAS). These strategies provide guardrails, instructing the AI not just to find conversions, but to find conversions that meet a specific efficiency metric. Conversely, the *Maximize* options (Maximize Conversions or Maximize Conversion Value) are designed to spend the full budget to achieve the highest possible volume, regardless of the marginal cost of the last few conversions. When coupled with the expansive targeting of AI Max, this can often lead to rapid budget depletion on high-cost conversions, resulting in exceptionally high CPAs or very low ROAS figures. If you choose a “Maximize” strategy with AI Max, mandatory, frequent monitoring of performance metrics and budget pacing is required.

Analyzing Necessary Conversion Volume

Machine learning models require data to learn. Without a sufficient and steady volume of conversions, AI Max cannot effectively train itself, leading to erratic and unpredictable spending. Technically, Google allows AI Max to be enabled on any campaign, even those with zero conversions. However, practical experience dictates clear minimums:

* **Under 30 Conversions Per Month:** Performance is typically highly erratic. The model lacks the data needed to make consistent, informed bidding decisions across the vast potential keyword landscape AI Max opens up.
* **Over 100 Conversions Per Month:** Campaigns that consistently generate over 100 conversions per month tend to perform better, provided there is a history of broad match success. This high volume gives the AI engine the critical mass of data needed to stabilize performance and execute accurate segmentation.

To introduce AI Max into your account safely, begin with high-volume, non-brand campaigns. These campaigns have the data necessary to train the AI quickly and present the greatest opportunity for expanding market reach.

Eliminating Budget Constraints

AI Max is designed for expansion, meaning it requires financial headroom. If your campaigns are
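The thresholds in this checklist can be collapsed into a quick pre-flight script. A minimal sketch, assuming hypothetical campaign stats pulled from your own reporting; the `CampaignStats` fields are invented for illustration and are not a Google Ads API object.

```python
from dataclasses import dataclass


@dataclass
class CampaignStats:
    # Hypothetical fields from your own reporting, not an API object.
    name: str
    monthly_conversions: int
    tracks_conversion_value: bool
    conversions_deduplicated: bool
    bid_strategy: str   # e.g. "tCPA", "tROAS", "Maximize Conversions"
    budget_limited: bool  # campaign flagged as limited by budget


TARGET_STRATEGIES = {"tCPA", "tROAS"}


def ai_max_readiness(c: CampaignStats) -> list[str]:
    """Return warnings based on the pre-test checklist above."""
    warnings = []
    if not c.conversions_deduplicated:
        warnings.append("Conversions may be double-counted; fix deduplication first.")
    if not c.tracks_conversion_value:
        warnings.append("No conversion values tracked; value-based bidding will be blind.")
    if c.monthly_conversions < 30:
        warnings.append("Under 30 conversions/month: expect erratic performance.")
    elif c.monthly_conversions < 100:
        warnings.append("30-100 conversions/month: usable, but 100+ stabilizes learning.")
    if c.bid_strategy not in TARGET_STRATEGIES:
        warnings.append("Maximize strategy chosen: monitor pacing and CPA/ROAS closely.")
    if c.budget_limited:
        warnings.append("Budget-constrained campaign: AI Max needs headroom to expand.")
    return warnings


print(ai_max_readiness(CampaignStats(
    name="Non-brand core", monthly_conversions=140, tracks_conversion_value=True,
    conversions_deduplicated=True, bid_strategy="tROAS", budget_limited=False)))
# []  -> no warnings, a reasonable candidate for an AI Max test
```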


Yahoo debuts Scout, an AI search and companion experience

The Dawn of a New Search Era: Introducing Yahoo Scout In a significant move demonstrating its renewed commitment to core digital services, Yahoo has officially debuted the first iteration of its sophisticated, AI-powered answer engine and companion: Yahoo Scout. Launched today, Scout represents more than just a chatbot; it is Yahoo’s comprehensive strategy for integrating generative AI directly into the fabric of its massive digital network, offering users a personalized, guided experience. Yahoo Scout is immediately available for public use at scout.yahoo.com. Crucially, its functionality is not confined to a standalone website. Yahoo has seamlessly embedded Scout’s intelligence across its most critical properties, including Yahoo News, Yahoo Finance, Yahoo Mail, and Yahoo Search. This deep integration positions Scout as a true AI companion designed to guide and assist users directly within the platforms they rely on daily. Defining Yahoo Scout: An AI Search Engine with Personality Yahoo Scout is positioned as Yahoo’s distinct entry into the competitive field of generative AI search, placing it alongside major players like Google’s AI Mode and tools such as OpenAI’s ChatGPT. However, Yahoo’s approach emphasizes personality and accessibility, aiming to make the advanced technology relatable and easy to use for a broad audience. Yahoo has focused heavily on giving Scout a genuine, engaging personality. The goal, according to Yahoo, is to create an experience that feels friendly, fun, and intuitively understandable for people of all ages. This focus on user experience is evident from the moment a user lands on the homepage. Key Features of the Scout Interface Upon visiting Yahoo Scout, users encounter a playful yet organized interface. The experience begins with: Engaging Visuals: The homepage greets users with an animated icon and a distinctive, catchy slogan. These icons are dynamic and change, featuring items like a cowboy hat, a walking cartoon brain, a gold medal, or a crystal ball, lending a sense of whimsy and approachability to the technology. Central Search Box: A prominent search box serves as the main entry point for queries. Categorized Suggested Searches: Below the query field, Yahoo offers filtered suggestions, allowing users to instantly narrow their search focus across topics like finance, sports, news, shopping, and travel. This structured approach helps guide user intent from the outset. Query History: A feature on the left side of the screen displays past queries, ensuring continuity by allowing users to effortlessly jump back into previous research or conversation threads. The entire aesthetic of Scout reflects Yahoo’s ambition to stand out in a field often characterized by minimalist design, proving that advanced AI functionality can coexist with a vibrant, inviting brand identity. Yahoo’s Competitive Edge: Leveraging Massive Data Assets In the highly competitive arena of AI, proprietary data and user knowledge are the most valuable assets. Yahoo holds a significant advantage over many emerging AI search rivals due to its established, massive global footprint. This historical presence in email, news, and search provides an unprecedented wealth of behavioral data and user signals that directly inform Scout’s capabilities. Yahoo currently boasts: Over 500 million detailed user profiles. More than one billion knowledge-graph entities, providing a structured understanding of real-world facts and relationships. 
Tracking of 18 trillion consumer events and signals across its comprehensive network of properties. This immense reservoir of deep data regarding user behavior, intent, and query patterns allows Yahoo to tailor and personalize AI-driven search experiences far more accurately than generic large language models (LLMs). By grounding its AI in these specific consumer signals, Yahoo Scout aims to deliver guidance that is not only accurate but also highly relevant to the individual user’s context. It is important to note the scale of Yahoo’s digital reach. The company currently ranks as the second largest email service provider globally and the third largest search engine, underscoring the massive built-in audience ready to adopt and test the new Scout capabilities. Scout’s Rich Content Integration A major functional benefit of Scout operating within the Yahoo ecosystem is its ability to seamlessly pull rich, structured content directly into its generative responses. When querying Scout, users can expect integrated features such as: Real-time Yahoo Finance widgets and detailed financial data. Automatically generated tables and charts for quantitative information (like stock performance or weather). Embedded citations, relevant news articles, and local weather forecasts. This deep integration ensures that Scout’s output is not just summarized text but a multimedia answer, combining generative insights with authoritative, first-party data. A Guiding Philosophy: Serving the Open Web and Publishers One of the most notable aspects of Yahoo Scout’s design is its core philosophy regarding the relationship between generative AI and content creators. Jim Lanzone, CEO of Yahoo, emphasized that Scout is fundamentally tied to Yahoo’s original mission: acting as a trusted guide to the internet. Crucially, the platform was built from the ground up to support the open web by actively directing traffic back to content creators and publishers. Prioritizing Downstream Traffic Early iterations of AI search engines faced significant criticism for consuming content and providing comprehensive answers without adequately attributing or rewarding the original sources, leading to concerns about reduced publisher traffic. Yahoo Scout aims to set a new standard for ethical AI content sourcing. As Lanzone pointed out, relying solely on licensing deals with AI companies is not a sustainable revenue model for every publisher. The historical model of sending referral traffic back to the source remains the most viable pathway for supporting a healthy open web ecosystem. Yahoo Scout implements several features to ensure that publishers benefit from its generative answers: Clear, Clickable Highlights: Scout responses feature prominent, wide blue highlights across the generated text. When a user hovers over these sections, the source appears, providing an immediate path to click through to the original content provider. Featured Source Placement: Every response includes an easy-to-spot “featured source,” often accompanied by a “Read more” prompt, explicitly encouraging the user to visit the source article. Enhanced Visual Citations: Scout further emphasizes source content by including tables, imagery, and relevant news articles throughout its answers, making the citation process highly


4 Facebook ad templates that still work in 2026 (with real examples)

The Myth of Viral Inspiration and the Reality of Repeatable Success In the high-speed world of digital marketing, especially on platforms like Facebook and Instagram, the pressure to produce wildly original and uniquely “viral” content can be exhausting. Many marketers dedicate valuable time scrolling through their feeds, desperately searching for the next big creative breakthrough. However, this quest for novelty often overlooks a fundamental truth of performance marketing. The secret to high-performing advertisements in 2026 isn’t about being groundbreaking; it’s about being predictable, effective, and rooted in psychological principles that have driven commerce for decades. Even with the introduction of sophisticated AI creative tools and shifting consumer behavior, the most successful Facebook ads rely on the same repeatable, proven templates. Why chase fleeting trends when you can master structures that consistently deliver results? We are moving past the era of pure, unbridled inspiration and focusing instead on strategic deployment. This article cuts through the noise of modern “creative strategy” buzzwords to highlight four fundamental Facebook ad templates that continue to drive conversion and scale businesses, complete with tangible examples from top brands. The Enduring Power of Ad Templates in a Data-Driven Era The digital advertising landscape today is characterized by fierce competition, rising costs, and complex attribution challenges following privacy changes. In this environment, stability and clarity are invaluable assets. Ad templates provide the necessary framework to maintain message clarity and minimize decision fatigue—both for the customer and the creative team. In 2026, where AI often handles image generation and audience targeting, human marketers must focus on the psychological structure of the message. Templates allow for rapid A/B testing, ensuring that you are only varying one or two elements (e.g., the specific pain point or the call-to-action) rather than redesigning the entire creative from scratch. This systematic approach is essential for optimizing campaign efficiency. 1. Problem? Meet Solution: Advertising 101 Pain Point → Relief → Simple Next Step This is arguably the most resilient template in the history of advertising. Its enduring success stems from its alignment with basic human motivation. People don’t purchase products or services because they love your brand; they purchase solutions to problems they are actively experiencing. This model ensures you meet the customer precisely where their need is greatest. Understanding the customer journey starts not with product features, but with their inner monologue. Most customers wake up thinking about their daily frustrations: “I’m constantly wasting time on repetitive tasks.” “I feel stuck and need a path forward.” “I spent too much money last month.” “I can’t stay consistent with my goals.” An effective problem-solution ad validates these internal struggles. If a customer doesn’t recognize that their situation is solvable, they will never look for an answer. Your role is to first identify and articulate that problem better than they can, and then immediately introduce your product as the natural, logical answer. Example: ClickUp ClickUp, operating in the highly competitive project management software space, doesn’t waste time detailing every feature. 
Instead, their strategy focuses on a modern, acute pain point common among tech professionals: the fragmentation of workflow across too many tools and apps. The ad reframes the user experience: stop switching between platforms and transition to one unified system. They are not selling software; they are selling a deeper value proposition that resonates on an emotional level. This includes:

* **Mental Relief:** Reducing cognitive load and organizational anxiety.
* **A Single Source of Truth:** Centralizing information eliminates searching and guesswork.
* **Increased Productivity:** Less context switching translates directly to time savings.
* **The Promise of Control:** Restoring order to a chaotic work environment.

By defining the solution in terms of emotional benefit rather than just functionality, ClickUp ensures maximum relevance in a busy feed. For Meta Ads focused on lead generation, this template is unparalleled because it immediately qualifies the audience—only those experiencing the stated problem will engage. (Dig deeper: Meta Ads for lead gen: What you need to know)

Plug-and-play copy starter:

Still dealing with [specific, relatable problem]? You’re not alone – and you don’t have to stay stuck. [Product/service] helps you [key emotional benefit] without [common objection or difficulty]. Get started → [CTA]

2. Can Your Competitors Do This? The Power of Differentiation

Unique Selling Point → Instant Comparison → ‘Oh, Hey’ Moment

In 2026, most industries are saturated. Whether you sell specialized SaaS, consumer packaged goods, or online courses, you are constantly fighting for market share. The competitive comparison template works by making the choice incredibly simple for the consumer: why should I pick you over the dozens of alternatives? A common mistake is believing you need a radical innovation to employ this template effectively. That’s rarely the case. Differentiation can often be found in your process, your priority, or your target audience. All that matters is that your difference is valuable and easily understood by a scrolling customer. This template demands that you clearly articulate your Unique Selling Proposition (USP) and use it as a point of comparison, even if the competitor is never named directly. The goal is to create an “Oh, hey” moment where the customer recognizes that your offering solves a critical secondary pain point that the standard solution ignores.

Example: The Woobles

The craft market is ancient. Crocheting kits and patterns have been available forever. Yet, The Woobles achieved significant market penetration by applying modern user experience design principles to an old hobby. Their success is a perfect demonstration of the power of the differentiation template. The ad doesn’t just say, “Buy our kit.” It positions their product against the historical difficulty of learning crochet. Traditional kits often intimidate beginners, leading to frustration and abandoned projects. The Woobles stacked their differentiators to overcome these objections, making the purchase feel risk-free and inevitable:

* **Cute, Modern Projects:** Appealing designs that motivate modern consumers.
* **Designed for True Beginners:** Focusing solely on the new learner demographic.
* **Ergonomic Tools:** Thicker yarn and a chunky hook simplify the tricky initial steps.
* **Step-by-Step Video Tutorials:** Removing the ambiguity found in written patterns.

Their USP isn’t just that
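Because templates are meant to be reused and A/B tested with only one or two elements varied, the plug-and-play copy starter from template #1 can be treated as a literal fill-in-the-blanks string. A minimal sketch; the product names and slot values below are invented for illustration.

```python
# The bracketed slots come from the article's copy starter; the values are made up.
COPY_STARTER = (
    "Still dealing with {problem}? You're not alone, and you don't have to stay stuck. "
    "{product} helps you {benefit} without {objection}. Get started -> {cta}"
)

variants = [
    {"problem": "juggling five project tools", "product": "ExampleApp",
     "benefit": "keep every task in one place", "objection": "migrating data by hand",
     "cta": "Start your free trial"},
    {"problem": "juggling five project tools", "product": "ExampleApp",
     "benefit": "keep every task in one place", "objection": "a steep learning curve",
     "cta": "Start your free trial"},  # only the objection slot changes for the A/B test
]

for v in variants:
    print(COPY_STARTER.format(**v))
```

Varying a single slot at a time keeps the test clean: any performance difference can be attributed to that one element rather than to a wholesale creative redesign.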


Why Search and Shopping ads stop scaling without demand

The Search Engine Marketing Paradox: When Optimization Isn’t Enough If you spend any significant time immersed in the world of performance marketing—whether reading PPC forums, debating in industry Slack groups, or fielding questions at digital conferences—you’ve undoubtedly encountered the recurring, frustrating question: “Why are my Google Ads stuck? I’m optimizing everything, but growth has completely plateaued.” On the surface, everything seems to be running smoothly. Budgets are healthy, the shopping feed is meticulously clean, keyword bid strategies are refined, and impression share (IS) metrics look robust. Yet, month over month, the needle barely moves. The common impulse is to blame the algorithm, the competition, or a technical glitch. However, the reality is often much simpler, and far more uncomfortable: your growth isn’t stalling because your campaigns are broken; it’s stalling because you have reached the upper limit of *existing market demand*. In highly specialized niche markets, or categories governed by strong seasonality and limited audience size, growth is naturally capped. While adopting broad match targeting or leveraging AI-driven systems like Performance Max (PMax) can certainly stretch your reach to adjacent and related queries, these tactics only capture intent that *already exists*. Once you have thoroughly covered the available pool of relevant commercial searches, no amount of bidding optimization can conjure new prospects out of thin air. This is the essential, often overlooked truth of paid search and shopping advertising: Google Ads does not create demand—it captures it. If the volume of people searching for your product or solution is finite, your scaling potential is equally constrained. When growth stagnates, the critical strategic pivot isn’t to ask, “What technical setting is wrong in Google Ads?” but rather, “What are we doing upstream to generate new market demand that will eventually fuel future searches?” Search and Shopping: Demand Capture, Not Demand Creation To truly understand the ceiling on paid search growth, marketers must be crystal clear about the fundamental nature of channels like Google Search and Shopping. They are, by design, *reactive* channels. These platforms excel at positioning your product or service directly in front of highly qualified individuals who are actively researching a solution or ready to make a purchase. They are the ideal closing mechanism. Crucially, however, ads only appear when someone initiates a query. No search query means no ad impression. The Illusion of High Impression Share One of the most deceptive metrics in the scaling discussion is Impression Share (IS). Achieving 90% IS feels like a major victory—and in terms of competitive presence, it is. It suggests you are winning nearly every auction relevant to your current keyword set. But this metric is only measured against the total number of searches *that occurred*. If your highly relevant market generates only 5,000 commercial searches this month, reaching 90% IS means you captured visibility for 4,500 of them. You cannot suddenly scale that to 50,000 impressions next month simply by raising your budget or improving your Quality Score. The market size dictates the limit. While modern tools like broad match or AI Max campaigns (including Performance Max) are powerful for increasing coverage, they are fundamentally tethered to user intent. They expand coverage by finding adjacent, related, or predicted intent signals. 
If the public isn’t searching for related terms, or if your category has low overall public awareness, there is nothing for the algorithm to match against. This contrasts sharply with proactive platforms like Meta (Facebook/Instagram), TikTok, YouTube, and traditional Display networks. On those platforms, increasing your budget directly correlates to increasing reach and frequency—you can literally buy more eyeballs and drive initial awareness, thereby *creating* the intent that Search will later capture. Search, conversely, operates as a high-intent closer, not a broad awareness generator.

The Constraints of Niche Markets and Seasonality

Scaling issues are often most acute in specialized or niche markets where the Total Addressable Market (TAM) of searchers is inherently small. For instance, a vendor selling proprietary industrial solvents might easily reach 95% IS, not because they are perfectly optimized, but because only a few hundred engineers globally are searching for those exact terms monthly. Similarly, businesses driven by seasonality—such as tax preparation software, holiday retail goods, or seasonal tourism—will see their scaling potential expand and contract strictly according to the calendar. You cannot force peak season search volumes in July if your business is focused on Black Friday or Christmas shopping. Recognizing and respecting these market limitations is the first step toward building a sustainable, realistic growth strategy.

Mapping the Origins of Demand: The Full-Funnel Framework

If Search and Shopping are the destination channels, marketers must systematically invest in the upstream channels that serve as the fuel line. We can categorize these demand-generating activities using the classic, highly relevant framework of Owned, Earned, and Paid media.

Owned Media: Nurturing and Capturing Internal Demand

Owned channels are the assets you fully control—your website, email list, blog content, and CRM database. While owned media rarely sparks *brand-new* demand for an unaware prospect, it is absolutely essential for nurturing existing curiosity and steering prospects toward a high-intent search action.

* **Email Marketing and CRM:** A D2C retailer, for example, might run a simple “VIP early access” campaign via Meta or lead-gen ads to build a pre-sale email list. When the sale officially launches, that email blast directly fuels a spike in branded searches (“Brand X Black Friday deals”).
* **SEO and Content Marketing:** A B2B SaaS company that publishes detailed, helpful FAQ guides or technical comparisons serves a critical function in the early research phase. A prospect who finds this content organically might not buy immediately, but when they are ready to convert, they are far more likely to Google the brand name directly, leading to a cheap, high-converting branded search click.

Owned channels provide the structure to ensure that once curiosity is sparked (by Earned or Paid efforts), it is efficiently channeled toward conversion-ready intent.

Earned Media: Building Trust and Credibility

Earned media encompasses the visibility you don’t directly pay for: PR coverage, positive reviews, organic social media
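The impression-share ceiling described earlier (5,000 commercial searches at 90% IS) is simple arithmetic, and making it explicit helps frame the budget conversation. A minimal sketch; the function name and output format are ours, the numbers are the article's example.

```python
def impression_ceiling(monthly_market_searches: int, impression_share: float) -> dict:
    """Impression share only measures coverage of searches that actually happened,
    so the market's query volume, not the budget, sets the ceiling."""
    captured = int(monthly_market_searches * impression_share)
    return {
        "captured_impressions": captured,
        "headroom_left": monthly_market_searches - captured,
    }


# The article's niche-market example: 5,000 commercial searches, 90% IS.
print(impression_ceiling(5_000, 0.90))
# {'captured_impressions': 4500, 'headroom_left': 500}
# Raising budget cannot push captured impressions past 5,000;
# only upstream demand generation can grow the 5,000 itself.
```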


EU puts Google’s AI and search data under DMA spotlight

The Shift from Regulatory Theory to Execution The European Union has moved definitively into the execution phase of its landmark Digital Markets Act (DMA), signaling that theoretical compliance is no longer sufficient for designated “gatekeepers.” The European Commission recently launched two formal “specification proceedings” targeting Google. These proceedings are designed not merely to audit compliance, but to formally define the technical and operational mandates Google must implement to ensure fair competition in two critical areas: mobile artificial intelligence (AI) integration and the sharing of proprietary search data. This strategic escalation by the European Commission underscores a commitment to reshape the digital landscape. By focusing the DMA’s power on Google’s dominant platforms—Android and Google Search—regulators aim to limit the enormous competitive advantages the tech giant extracts from its own ecosystem. For digital publishers, competing search engines, and the vast SEO community, these developments could herald a fundamental realignment of platform reliance and data availability. Decoding the Formal Specification Proceedings When the Digital Markets Act came into force, it laid out broad obligations for companies designated as gatekeepers—firms that control essential core platform services and wield significant market power. Google, recognized as a gatekeeper for services including Search, Android, Chrome, YouTube, Maps, Shopping, and online advertising, has been required to comply with these obligations since March 2024. However, many DMA requirements are framed broadly. For instance, the DMA mandates that gatekeepers must ensure rival services can interoperate effectively. Defining what “effective interoperability” means for a complex, closed operating system like Android, or how confidential search data can be shared in an “anonymised” and “non-discriminatory” way, requires precise regulatory guidance. This is where the formal specification proceedings come into play. What is a Specification Proceeding? A specification proceeding is the regulatory tool the European Commission uses to translate general DMA requirements into structured, technical, and enforceable mandates. Instead of waiting for potential infringements, the Commission proactively defines the exact terms of compliance. These structured dialogues force the gatekeeper (in this case, Google) to clearly demonstrate how they plan to achieve compliance, under the direct scrutiny of the EU regulators. It transforms ongoing regulatory dialogue into a time-bound, defined process with specific outcomes that must be adhered to, ensuring that the spirit of the DMA is met, not just the letter. The Six-Month Timeline for Compliance The Commission has established a rapid timeline for these proceedings, reflecting the urgency of addressing competitive imbalances in fast-moving sectors like AI. Within three months of opening the formal process, the Commission is set to send Google its preliminary findings and proposed measures. This early intervention allows regulators to test Google’s initial proposals and provide feedback swiftly. The full proceedings are slated to conclude within six months. Upon conclusion, non-confidential summaries of the findings and the mandated technical requirements will be published. 
This publication allows third parties—including competing search engines, AI developers, and industry stakeholders—to weigh in on the effectiveness and fairness of the compliance measures, adding a layer of public oversight to the enforcement process.

Focus Area 1: Unlocking Android for AI Interoperability

The first specification proceeding centers squarely on the future of mobile AI and the deep integration capabilities within the Android ecosystem. Regulators are examining how Google must grant third-party developers free and effective access to the crucial Android hardware and software features currently utilized by Google’s own first-party AI services, such as Gemini.

The Challenge of Deep Integration

AI assistants require deep integration to function seamlessly across a mobile device. They need access to notification controls, sensitive microphone and camera APIs, biometric data, and core system settings to provide contextually relevant and instantaneous responses. Historically, Google’s first-party tools have enjoyed a privileged status, often bypassing the standard sandbox restrictions placed on third-party apps. The goal of this EU mandate is radical parity. The Commission aims to ensure that rival AI providers can integrate just as deeply into Android devices as Google’s proprietary services. This addresses the significant competitive barrier Google holds by controlling both the operating system (Android) and the dominant mobile AI assistant (Gemini). If successful, users should theoretically be able to swap out Gemini for a competing AI assistant—say, one powered by a European startup—and experience the same level of functionality and system access.

Impact on Third-Party AI Developers

For independent software vendors (ISVs) and rival AI labs, the stakes are enormous. If the Commission successfully mandates open, non-discriminatory access to core Android features, it could fundamentally accelerate competition in the nascent mobile AI market. Developers would no longer be hampered by system limitations that prevent their AI tools from becoming the true “default” assistant on Android phones. This specification proceeding signals clearly that AI services, particularly those tied directly to platform control over device features and user data, are now squarely within the scope of DMA enforcement. The EU is taking preventative measures to ensure that platform control does not tilt these rapidly evolving markets before competitors have a legitimate chance to scale and innovate.

Focus Area 2: Mandated Search Data Sharing

Perhaps the most disruptive aspect for the core search industry and SEO professionals is the second specification proceeding, which addresses how Google must share critical, anonymized search data with competing search engines. Google Search is the world’s most dominant search engine, and its competitive advantage rests largely on the massive volumes of proprietary user interaction data it collects daily. This dataset informs everything from ranking algorithms to new feature development. The DMA seeks to reduce this asymmetry by mandating data sharing under “fair, reasonable, and non-discriminatory” (FRAND) terms.

The Specific Data Points Under Scrutiny

The mandate requires Google to share several highly valuable categories of data:

1. **Search Ranking Data:** Information pertaining to the results that appear for specific queries and their relative positions.
2. **Query Data:** The raw, anonymized text of search queries entered by users.
3. **Click Data:** Records indicating which results users ultimately clicked on.
4. **View Data:** Information related to how many users viewed a specific result page.

Access to this kind of behavioral


56% Of CEOs Report No Revenue Gains From AI: PwC Survey via @sejournal, @MattGSouthern

The AI Hype vs. Business Reality: Unpacking the PwC Findings The current business landscape is saturated with talk of Artificial Intelligence, particularly the revolutionary potential of generative AI. CEOs worldwide are pouring billions into sophisticated platforms, believing they are investing in the essential fuel for future growth and operational superiority. Yet, a crucial survey from PwC reveals a sobering truth: for a significant majority of global business leaders, these massive AI investments have yet to translate into tangible financial returns. The extensive survey, which polled over 4,000 CEOs spanning 95 countries, delivered a major reality check to the fervent optimism surrounding digital transformation. A striking 56% of these chief executives reported that they have not yet realized any meaningful revenue gains or cost benefits stemming from their AI initiatives. This statistic highlights a critical disconnect between the promise of AI technology and the practical realities of organizational deployment and value extraction. While the AI sector continues to hit new valuation highs and technical capabilities seem to expand daily, organizations are struggling to convert laboratory success into enterprise ROI. Understanding why more than half of global business leaders feel this dissatisfaction is essential for charting a course toward successful, sustainable digital transformation. Diagnosing the Disconnect: Why AI Investments Stall The finding that 56% of CEOs report stagnant revenue or cost reduction is not necessarily an indictment of AI technology itself, but rather a reflection of the inherent difficulty in integrating advanced, complex systems into existing business structures. Achieving a genuine return on investment (ROI) from AI requires much more than simply purchasing software or subscribing to an API; it demands fundamental changes across data strategy, talent acquisition, and organizational workflow. The Foundational Challenge of Data Readiness One of the most persistent hurdles preventing successful AI adoption is the state of a company’s foundational data infrastructure. AI models—especially complex machine learning (ML) and generative AI systems—are only as good as the data they are trained on and fed with. Many organizations, particularly older enterprises undergoing digital transformation, possess decades of siloed, inconsistent, and unstructured data. Data cleanliness, accessibility, and governance are often overlooked in the rush to implement cutting-edge models. If the underlying data is incomplete, biased, or poorly organized, the AI output will be unreliable, leading to failed proof-of-concepts (PoCs) and a complete lack of measurable business benefit. CEOs who bypass the costly and arduous process of data modernization will inevitably find their AI investments yielding zero returns. Undefined Use Cases and Lack of Strategic Alignment A common failure point uncovered by business analysts is the tendency for companies to implement AI technology simply because competitors are doing so, or because of a generalized fear of being left behind. This approach results in “AI for AI’s sake,” where technology is deployed without a clear, quantifiable business problem to solve. Successful digital transformation requires precise identification of key organizational pain points—whether it is customer service automation, supply chain prediction, or content generation efficiency. 
If a business unit implements a large language model (LLM) but hasn’t defined clear key performance indicators (KPIs) for measuring success, or if the chosen use case doesn’t align with core business strategy, the effort will burn resources without demonstrating value. For the 56% of CEOs surveyed, a lack of rigorous strategic planning likely contributed to the inability to measure or generate financial uplift. The Critical Role of Talent and Skill Gaps Even the most sophisticated AI systems require skilled human oversight and management. The current global talent market is experiencing a severe shortage of professionals capable of bridging the gap between theoretical AI capabilities and practical business implementation. This includes data scientists, ML engineers, AI ethicists, and crucially, business leaders who understand how to integrate these tools into operational workflows. A CEO may invest heavily in technology, but if the staff lacks the skills to maintain the models, interpret the results, and drive adoption across departments, the project will falter. The investment in human capital—upskilling existing teams and aggressively recruiting specialized talent—is often underestimated in initial AI budgets, resulting in deployment failures and stalled ROI. Navigating the AI Hype Cycle: Patience and Perspective The findings from the PwC survey reflect a pattern observed frequently throughout the history of enterprise technology adoption, often summarized by the Gartner Hype Cycle. AI, and particularly generative AI, is currently transitioning from the “Peak of Inflated Expectations” toward the “Trough of Disillusionment.” The Trough of Disillusionment In the initial hype phase, the potential of a new technology is dramatically overstated, leading to massive, immediate investment expectations. When those expectations are not met within the first 12 to 24 months, businesses experience a period of disappointment—the Trough of Disillusionment. The 56% figure reported by PwC strongly suggests that many large organizations are currently experiencing this phase. This disillusionment is crucial because it forces companies to pivot from exploratory, experimental projects toward disciplined, targeted integration. Genuine ROI from AI is rarely instantaneous. It often requires systemic overhauls, regulatory compliance adjustments, and significant change management—processes that inherently take years, not quarters, to fully mature. CEOs who understand this temporal context are better positioned to endure the initial period of low returns and realize long-term, compounding benefits. Operational Efficiency vs. Direct Revenue Generation It is important to differentiate between two primary ways AI delivers value: cost reduction (operational efficiency) and direct revenue generation. Many organizations that *are* seeing success started with projects focused on reducing expenditure through automation. Examples include using AI for robotic process automation (RPA) in back-office functions, optimizing internal IT ticketing systems, or automating quality control in manufacturing. These gains often manifest as cost avoidance rather than immediate topline revenue increases. For organizations that reported zero gains, it might indicate that they prematurely jumped to complex revenue-generating applications (like hyper-personalized marketing or algorithmic trading) before establishing the simpler, more stable foundations of operational efficiency. 
Strategic AI adoption often dictates a phased approach: first, stabilize operations and reduce costs; second, leverage insights to optimize customer experience; third, innovate new products and revenue streams. Sectoral


Ask A PPC: What Is The PPC Manager’s Role In The AI Era? via @sejournal, @navahf

The Digital Transformation of Paid Search Management The landscape of Pay-Per-Click (PPC) advertising has undergone a seismic shift, fundamentally driven by the rapid integration of Artificial Intelligence (AI) and machine learning. Historically, the PPC manager’s role was defined by meticulous, repetitive tasks: manual bid adjustments, keyword scrubbing, and endless A/B testing cycles. Today, AI handles these operational burdens with superior speed and scale. This widespread automation has sparked intense debate about the necessity of the human expert. However, rather than rendering the PPC manager obsolete, the AI revolution elevates the role from tactical executor to strategic overseer and data custodian. Success in modern paid search is no longer about mastering interfaces; it’s about defining strategy, ensuring data integrity, and applying the critical human judgment that algorithms simply cannot replicate. This transformation reframes the entire AI conversation around accountability and sophisticated human guidance. The Evolution of the PPC Manager: From Operator to Architect Machine learning has automated vast swathes of campaign execution. Smart Bidding, Dynamic Search Ads (DSA), and fully automated solutions like Performance Max (PMax) on Google Ads now manage the day-to-day fluctuations of the auction environment. This technological leap removes the need for constant, low-level operational intervention, but it places a far greater premium on the setup, maintenance, and high-level strategy that guides the AI. Defining Campaign Objectives and Frameworks The core responsibility of the modern PPC manager is now architecture. They must serve as the principal designer of the campaign structure, ensuring the AI operates within well-defined, measurable parameters aligned with overarching business goals. The AI is a powerful tool, but it is purely instrumental; it needs human direction to understand the difference between a high-volume click and a genuinely high-value customer. This includes setting appropriate targets (Target ROAS, Target CPA), selecting the correct audiences, and configuring the campaign structure to segment data signals effectively. If the framework is flawed, the AI will optimize tirelessly toward a suboptimal outcome, wasting significant budget along the way. The PPC manager’s expertise is crucial for translating broad business KPIs (e.g., market penetration, lifetime customer value) into executable, algorithmic targets. Mastering Automated Bidding Systems While AI handles the actual bidding decisions millions of times per second, the PPC manager retains full responsibility for governing the bidding strategy. This involves selecting the most appropriate Smart Bidding strategy for the campaign phase, adjusting seasonality inputs, and providing strategic budget pacing. Furthermore, the manager must understand the limitations and constraints of the chosen algorithms. For instance, a switch to Target ROAS requires a thorough understanding of the necessary conversion volume and the historical data window the algorithm needs to learn effectively. This high-level technical proficiency ensures the AI is not starved of data or unnecessarily constrained by manual caps that counteract its optimization goals. Data Integrity: The Foundation of AI Success In the age of algorithmic advertising, data is the fuel, and the PPC manager is the primary quality control officer. The mantra “Garbage In, Garbage Out” has never been more relevant. 
If the data signals fed into the automated systems are inaccurate, delayed, or incomplete, the resulting optimization will be severely flawed, leading to poor ROI and misattributed results.

Conversion Tracking and Measurement Accuracy

Ensuring flawless conversion tracking is perhaps the most critical technical function remaining for the PPC professional. This goes far beyond merely implementing a pixel. It involves sophisticated setup of enhanced conversions, server-side tracking (API integration), and robust verification across all touchpoints, especially in complex multi-platform environments. The manager must routinely audit the conversion paths, ensuring values are accurately passed, transaction IDs are unique, and deduplication protocols are functioning correctly. Any discrepancy in reported conversions directly poisons the machine learning model, causing it to incorrectly value specific keywords, audiences, or placements.

The Critical Role of First-Party Data Management

As third-party cookies diminish, the reliance on proprietary first-party data grows exponentially. The modern PPC manager is directly responsible for curating, segmenting, and activating these valuable audience lists. This includes:

* **CRM Integration:** Ensuring seamless and real-time synchronization between the Customer Relationship Management (CRM) system and advertising platforms.
* **Audience Segmentation:** Creating highly granular customer lists (e.g., high-value repeat purchasers, users who abandoned cart 3+ times, recent returners) that serve as potent signals for AI targeting models.
* **Exclusion Lists:** Maintaining stringent exclusion lists to prevent wasted spend on non-converting users or internal employees.

By providing the AI with high-quality, ethically sourced first-party data, the PPC manager drastically improves the algorithm’s ability to find lookalike audiences and tailor messaging with high precision.

Feed Optimization for Retail and E-commerce

For any business utilizing Shopping campaigns or PMax for product promotion, the manager’s oversight of the product feed becomes paramount. The feed is the literal source of truth for the AI, governing inventory, pricing, descriptions, and category placement. AI relies heavily on attributes like product type, custom labels, and accurate categorization to identify the right moment to serve an ad. The PPC professional must work closely with data teams to optimize titles for search intent, ensure competitive pricing attributes are visible, and strategically use custom labels to segment high-margin products or manage seasonal inventory, thereby providing necessary strategic inputs that the AI then executes upon.

The Indispensable Element: Human Judgment and Responsibility

While AI excels at processing massive datasets and identifying patterns, it inherently lacks consciousness, intuition, and ethical understanding. This gap is where human judgment becomes the defining differentiator for successful PPC campaigns.

Interpreting Anomalies and Contextualizing Performance

AI can flag performance changes, but it cannot always explain the “why.” A sudden dip in conversion rate might be attributed by the AI to a shift in bidding competition, but the PPC manager is equipped to look outside the platform. They connect the drop to external factors—a competitor’s PR crisis, a shift in global supply chains, a major economic event, or even a technical outage on the client’s website.
This contextual intelligence allows the manager to override or modify AI behavior temporarily, preventing the system from overreacting to short-term noise or optimizing based on misleading signals. Creative
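One of the audits described above, checking that transaction IDs are unique across pixel and server-side sources, is easy to automate against a conversion export. A minimal sketch with invented rows and field names; adapt it to whatever your platform or CRM actually exports.

```python
from collections import defaultdict


def audit_duplicate_transactions(conversions: list[dict]) -> dict[str, int]:
    """Flag transaction IDs reported more than once across sources
    (for example, a browser pixel and a server-side upload both firing)."""
    seen: dict[str, int] = defaultdict(int)
    for c in conversions:
        seen[c["transaction_id"]] += 1
    return {tid: count for tid, count in seen.items() if count > 1}


# Hypothetical export rows; field names are illustrative only.
rows = [
    {"transaction_id": "T-1001", "source": "pixel", "value": 59.0},
    {"transaction_id": "T-1001", "source": "server", "value": 59.0},
    {"transaction_id": "T-1002", "source": "pixel", "value": 120.0},
]
print(audit_duplicate_transactions(rows))  # {'T-1001': 2}
```

Every duplicate found this way is a conversion the bidding model would otherwise count twice, inflating the performance signal it learns from.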


ChatGPT ads come with premium prices — and limited data

The New Frontier of Digital Advertising: Generative AI The rapid ascent of ChatGPT from an experimental chatbot to a global platform with hundreds of millions of users has inevitably led to one major business transition: monetization. OpenAI, the company behind the groundbreaking generative AI tool, is now positioning itself to capture significant revenue by introducing an advertising model within the conversational interface. However, this move introduces a complex paradox for digital marketers: the *ChatGPT ads* platform demands a premium price point while simultaneously offering significantly less data visibility than established advertising ecosystems. As marketers and publishers grapple with the implications of the “agentic web,” the initial details surrounding OpenAI’s advertising pitch suggest a unique, trust-first approach that prioritizes user experience and privacy over granular performance tracking. Understanding this delicate balance between high cost and limited data is crucial for any brand looking to be an early adopter in the AI advertising space. The Sticker Shock: Deconstructing the Premium CPM OpenAI is setting the bar high for entry into its advertising ecosystem. Reports indicate that the company is pitching premium-priced ad slots within ChatGPT, targeting a cost per thousand impressions (CPM) of approximately $60. Analyzing the $60 CPM Benchmark To understand the weight of this pricing, it must be benchmarked against industry standards. A $60 CPM is roughly three times higher than the typical CPM rates seen on behemoth social platforms like Meta (Facebook and Instagram). In the established world of performance advertising, high prices are usually justified by highly specific targeting capabilities and robust, end-to-end attribution data. The advertiser pays more because they know precisely who is viewing the ad, and critically, whether that view eventually led to a purchase, sign-up, or conversion event. OpenAI’s decision to price its inventory at such a high level, especially without offering the accompanying detailed conversion data, signals a significant strategic decision: they are betting on the quality of attention and the novelty of the environment itself. The Rationale for Premium Pricing: Attention Economy and Context Why should a brand pay three times the standard rate for an ad impression? The answer lies in the fundamentally unique nature of the ChatGPT experience. Unlike the fragmented attention users give to scrolling feeds or crowded websites, interaction with a generative AI tool like ChatGPT is highly focused. Users are actively engaged in a specific task, searching for deep information, generating content, or solving a problem. This creates a high-attention environment where an integrated ad impression is likely to have maximum impact. OpenAI is positioning its advertising space not as a massive, low-cost scale environment, but as a premium, high-impact channel. The value proposition shifts from “reach as many people as possible cheaply” to “reach highly engaged people in a contextually relevant moment.” For brands focused on high-quality exposure and establishing themselves as thought leaders, this concentrated attention may indeed justify the significant price tag. Contextual Ad Placement vs. Traditional Behavioral Targeting The ChatGPT environment naturally lends itself to highly contextual advertising. 
Contextual Ad Placement vs. Traditional Behavioral Targeting

The ChatGPT environment naturally lends itself to highly contextual advertising. For instance, if a user is prompting the AI for information on comparing high-end digital cameras, an ad for a specific camera brand or a photography course is highly relevant. This approach contrasts sharply with the behavioral targeting models that dominate platforms like Google and Meta, which rely on tracking user history across the web. Because ChatGPT advertising is deeply integrated into the conversation thread, the relevance is immediate and temporal, making the ad feel less intrusive and more helpful—a key factor in user acceptance of advertising within a utility tool.

The Data Paradox: Limited Visibility for Advertisers

While the premium price reflects the quality of attention, the limited data reporting presents the most significant hurdle for sophisticated digital marketers. The foundation of modern performance marketing rests on the ability to track the user journey precisely. OpenAI is intentionally limiting this visibility.

What Data is Available: Impressions and Clicks

Advertisers in the initial rollout of *ChatGPT ads* will receive only high-level reporting metrics, primarily the total number of impressions (views) and the total number of clicks the ad generated. These are essential metrics, but they represent only the first stage of the marketing funnel. For brands focused on awareness and top-of-funnel reach, impression and click data are sufficient for gauging initial exposure and engagement rates: they can determine the click-through rate (CTR) and the effective cost per click (CPC).

The Critical Gap in Downstream Attribution

The major sticking point for performance-focused marketers is the absence of downstream attribution data. Advertisers will have no insight into actions that occur after the user leaves the ChatGPT environment. If a user clicks an ad for a new software subscription within ChatGPT and subsequently purchases that subscription on the advertiser’s website, OpenAI will not provide the data linkage necessary to confirm that conversion. Metrics crucial for evaluating campaign success, such as Cost Per Acquisition (CPA), Return on Ad Spend (ROAS), and Lifetime Value (LTV), become impossible to calculate directly using OpenAI’s provided reporting. This constraint forces marketers to rely on either broad, lagged measurements (such as correlating an increase in direct website traffic with the ad run dates) or more complex, privacy-preserving measurement techniques, such as statistical modeling or incremental lift studies performed by third parties.

OpenAI’s Commitment to Privacy as a Business Model

The limitations on data reporting are not an accident or an oversight; they are a direct consequence of OpenAI’s core promise to its user base. This commitment to data privacy is both a functional limitation for advertisers and a powerful market differentiator for the company.

The Non-Negotiable Stance on User Data

OpenAI has publicly committed to two fundamental principles:

1. **Never selling user data to advertisers.**
2. **Keeping user conversations private and protected.**

These commitments create a high wall between the conversational data that makes ChatGPT powerful and the commercial demands of advertisers seeking granular targeting. Unlike Meta or Google, whose business models are predicated on deep profile creation derived from user activity, OpenAI is drawing a clear line between the conversations that power its product and the data it is willing to expose to advertisers.
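Returning to the measurement gap, the division of labor can be made concrete: CTR and CPC fall out of the impression, click, and spend figures the platform does report, while CPA and ROAS have to be estimated from the advertiser's own analytics. The sketch below uses entirely hypothetical numbers and a deliberately naive on-site estimate to illustrate the point.

```python
# Figures the platform reports (all hypothetical): impressions, clicks, and what was spent.
impressions = 500_000
clicks = 4_000
spend = 30_000.00                  # 500,000 impressions at a $60 CPM

ctr = clicks / impressions         # click-through rate
cpc = spend / clicks               # effective cost per click
print(f"CTR: {ctr:.2%}  CPC: ${cpc:.2f}")           # CTR: 0.80%  CPC: $7.50

# Figures the platform does NOT report: conversions and revenue must come from
# the advertiser's own analytics (e.g., comparing site conversions before and
# during the flight), so the resulting CPA and ROAS are only estimates.
estimated_conversions = 240        # hypothetical, measured on the advertiser's site
estimated_revenue = 54_000.00      # hypothetical attributed revenue

cpa = spend / estimated_conversions
roas = estimated_revenue / spend
print(f"Estimated CPA: ${cpa:.2f}  Estimated ROAS: {roas:.1f}x")   # $125.00, 1.8x
```

Any CPA or ROAS produced this way is an estimate layered on the advertiser's own attribution assumptions, not a figure OpenAI reports.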

Uncategorized

Google research points to a post-query future for search intent

The Impending Revolution in Search Understanding

For decades, the foundation of digital search has been the query. A user types keywords or phrases into a search bar, and the system responds with relevant results. This transactional model, while incredibly powerful, is now facing a profound transformation driven by advancements in artificial intelligence. Google, the undisputed leader in search, is actively steering toward a future where it understands a user’s underlying goal—or intent—long before a single query is typed. Recent research unveiled by Google points to the viability of a “post-query” search environment. This shift relies on inferring user intent directly from behavior—the taps, scrolls, clicks, and screen changes that define interaction within apps and websites. The groundbreaking aspect of this research is not merely the ability to extract intent, but the mechanism: successfully deploying small, efficient AI models directly on user devices, thereby matching the performance of much larger, more costly, and cloud-dependent systems like Gemini 1.5 Pro. This development carries massive implications for search engine optimization (SEO) and digital strategy. If successful, optimization will shift from focusing solely on typed keywords to maximizing the clarity and efficiency of the overall user journey.

The Evolution of Search Intent

In the world of SEO, search intent has traditionally been categorized into three or four types: informational (seeking knowledge), navigational (trying to reach a specific site), transactional (looking to buy or complete an action), and commercial investigation (researching before a purchase). These classifications are derived directly from the content of the search query itself. The post-query future proposed by Google represents a radical departure. Intent is no longer reactive—a response to a typed string—but proactive, inferred through context. The user’s interaction data becomes the primary signal.

Why User Behavior Is the New Keyword

To move beyond the search box, the AI system must observe patterns in user interaction. When a user opens an app, scrolls down a product page, taps a sizing guide, and then navigates to a shopping cart icon, these discrete actions collectively reveal a high-level goal, such as “purchase running shoes.” This form of intent extraction requires sophisticated Multimodal Large Language Models (MLLMs) capable of processing not just text, but also visual screen information (the “multimodal” aspect) and temporal sequences (the “over time” aspect). Historically, achieving this level of complex reasoning required enormous computational resources, typically housed in centralized cloud servers.

The Latency, Cost, and Privacy Problem of Cloud AI

While powerful large language models (LLMs) like those in the Gemini family can certainly infer intent from comprehensive user behavior data, running these models centrally presents three critical roadblocks:

Latency and Speed: Cloud-based systems introduce network delay. For real-time intent extraction necessary for agentic AI (systems that anticipate needs instantly), this latency is unacceptable.

Computational Cost: Large models consume immense energy and computing power. Running trillions of parameters continuously for every user interaction across billions of devices is financially prohibitive.

Privacy Concerns: User behavior data—taps, clicks, scrolling patterns, and app usage history—is highly sensitive. Sending this continuous stream of detailed activity to a central server raises significant privacy and security risks, which could deter user adoption.
The goal, therefore, became clear: deliver “big results” using “small models” that could operate entirely on the device, minimizing data transfer and maximizing user control.

Decomposition: The Strategic AI Breakthrough

The solution, detailed in the research paper “Small Models, Big Results: Achieving Superior Intent Extraction through Decomposition,” presented at EMNLP 2025, lies in simplifying the complex task of intent understanding through decomposition. Instead of asking one small model to synthesize a vast, messy stream of historical data and deliver a final goal, Google researchers broke the process into two smaller, sequential steps that even comparatively small MLLMs can execute with high accuracy. This architectural shift allows small, resource-efficient models to perform nearly as well as the massive, general-purpose models running in the cloud.

Step 1: Localized Interaction Summarization

The first stage of the decomposition focuses on capturing “micro-intents” from immediate user actions. This step is executed by a small AI model running directly on the device. For every screen interaction—a tap, a scroll event, or a screen change—the model generates three specific pieces of information:

Screen Content: A representation of what was visually present on the screen at that moment.

User Action: The precise input performed by the user (e.g., tapped the button labeled “Add to Cart”).

Tentative Guess: A preliminary, localized guess about the user’s intent for *that specific action*.

By keeping the focus narrow and immediate, this model avoids the heavy burden of trying to remember and reason over the entire session history.

Step 2: Factual Intent Aggregation

The second stage employs another small, specialized model to synthesize the overall session goal. Crucially, this model does not re-reason over the raw user data. Instead, it reviews the factual summaries generated in Step 1 and performs a filtering and aggregation task:

It reviews only the established facts (screen content and user actions) from the sequence of micro-summaries.

It purposefully ignores the “tentative guesses” or speculative reasoning generated in Step 1.

It produces one concise, objective statement summarizing the user’s overall goal for the entire session.

This two-step process bypasses a common failure mode in small LLMs: when forced to process long, high-noise data histories end-to-end, they often suffer from “catastrophic forgetting” or inaccurate reasoning. By ensuring the inputs to the final aggregator are clean, objective facts, the system significantly improves accuracy and reliability.

Validating Performance with Bi-Fact Scoring

To rigorously measure the success of this decomposed approach, Google researchers needed a metric more precise than subjective evaluation. Traditional methods often just ask whether an inferred intent summary “looks similar” to the correct answer, which fails to pinpoint exactly *why* a model succeeded or failed. The solution was the Bi-Fact scoring methodology. Bi-Fact focuses on measuring which facts about the user session are included in the generated intent summary, which facts are missing, and, most importantly, which facts were invented (hallucinated) by the AI.
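As a concrete (if simplified) reading of that idea, a Bi-Fact-style comparison can be treated as set overlap between the facts a reviewer extracts from the real session and the facts asserted in the model's generated summary: shared facts are included, reference facts with no counterpart are missing, and summary facts with no support are hallucinated. The sketch below assumes facts have already been normalized into comparable strings; the paper's exact formulation may differ.

```python
def bi_fact_score(reference_facts: set[str], summary_facts: set[str]) -> dict:
    """Compare gold session facts against facts asserted in a generated intent summary."""
    included = reference_facts & summary_facts        # correctly captured facts
    missing = reference_facts - summary_facts         # true facts the summary omitted
    hallucinated = summary_facts - reference_facts    # facts the model invented
    precision = len(included) / len(summary_facts) if summary_facts else 0.0
    recall = len(included) / len(reference_facts) if reference_facts else 0.0
    return {
        "included": included,
        "missing": missing,
        "hallucinated": hallucinated,
        "precision": precision,   # how much of the summary is actually true
        "recall": recall,         # how much of the session the summary captured
    }

# Hypothetical session: the user was comparison-shopping for running shoes.
reference = {"viewed running shoes", "opened sizing guide", "added item to cart"}
summary = {"viewed running shoes", "added item to cart", "applied a discount code"}
print(bi_fact_score(reference, summary))
```

Scoring at the fact level is what lets researchers distinguish a summary that failed by omission from one that failed by invention.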
