Google Text Ad Click Share Rises Sharply In Some Verticals

Understanding the Shift: The Resilience of Google Text Ads

In the rapidly evolving landscape of digital marketing, the dominance of Google Search remains a constant. However, the way users interact with the Search Engine Results Page (SERP) is undergoing a significant transformation. Recent data from a comprehensive analysis of approximately 16,000 U.S.-based search queries reveals a surprising trend: text ads are not just maintaining their relevance; they are seeing a sharp increase in click share. In several key verticals, text ads have increased their share of clicks by 7 to 13 percentage points year-over-year.

For years, digital marketers have debated the "death" of the traditional text ad. With the rise of Shopping ads, local map packs, and the integration of AI-driven responses, many assumed that the standard "Sponsored" text result would eventually fade into the background. This latest data suggests the opposite. Not only are text ads surviving, but they are also capturing a larger portion of the user's attention and action than they were twelve months ago. To understand why this is happening, we must look at the convergence of Google's UI updates, changing consumer behavior, and the increasing complexity of organic search.

The Data Behind the Surge

The analysis of 16,000 queries provides a robust sample size across a variety of industries. A 7-13 point increase in click share is not a minor statistical fluctuation; it represents a major shift in how traffic is distributed. Click share refers to the percentage of total clicks that a specific ad format receives out of the total clicks available. When text ads gain share, that traffic is typically being diverted from organic listings or other ad formats like Google Shopping.

This growth is particularly pronounced in "high-intent" verticals. These are sectors where users are looking for specific services, professional advice, or complex products that require more information than a simple product image can provide. In these instances, the descriptive nature of a text ad, enhanced by modern assets and extensions, provides the necessary context to win the click.

Key Verticals Leading the Change

While the rise in click share is visible across the board, certain industries are experiencing more dramatic shifts. Identifying these verticals helps explain the underlying mechanics of modern search behavior.

Financial Services and Insurance

In the financial sector, trust and detail are paramount. Users searching for "home insurance quotes" or "best high-yield savings accounts" are rarely looking for a quick visual checkout. They are looking for authority. Google text ads allow advertisers to utilize callouts, structured snippets, and sitelink extensions to highlight specific benefits like "A+ Rating," "Instant Online Quotes," or "No Hidden Fees." The increase in click share here suggests that users find the structured, text-heavy information in ads more helpful than the often cluttered organic results.

Legal and Professional Services

Legal searches are another area where text ads dominate. Because legal services are highly localized and specific to the user's needs, text-based messaging allows for precise targeting. The shift here may also be attributed to the prominence of Local Services Ads (LSAs) working in tandem with traditional text ads, creating a "walled garden" of paid results at the top of the SERP that pushes organic content further down the page.
B2B Software and SaaS

The Business-to-Business (B2B) sector has seen a significant migration toward text ads. As organic search becomes more competitive with "Best of" lists and aggregator sites, many SaaS companies are relying on text ads to capture users at the top of the funnel. By using text ads, they can control the narrative and direct users to specific landing pages designed for conversion, rather than letting them land on a generic blog post through organic search.

The AI Influence: SGE and AI Overviews

One cannot discuss the current state of Google Search without mentioning the Search Generative Experience (SGE), now known as AI Overviews. The integration of AI-generated summaries at the top of the SERP has fundamentally changed the visual real estate of the page. Paradoxically, the rise of AI may be a primary driver behind the increased click share for text ads.

When an AI Overview appears, it occupies a significant portion of the "above the fold" area. Below or alongside these AI responses, Google strategically places text ads. Because AI Overviews often synthesize information from multiple sources, some users may find the summary sufficient, while others, who need a specific service, look for the most prominent link to a provider. Text ads are often positioned even more prominently than before to ensure Google maintains its primary revenue stream while experimenting with AI. This creates a scenario where organic results are pushed so far down the page that the "Sponsored" text ads become the only viable option for users who want to take immediate action.

The Evolution of the Text Ad Format

Another reason for the rising click share is that the "text ad" of 2024 is vastly different from the text ad of 2014. Google has continuously added features that make these ads more interactive, visual, and persuasive.

Responsive Search Ads (RSAs)

The move to Responsive Search Ads as the default format has allowed Google's machine learning to optimize which headlines and descriptions are shown to which users. By testing thousands of combinations, the system serves the most relevant version of an ad for every specific query. This increased relevance naturally leads to higher click-through rates (CTR) and a larger slice of the click share.

Asset Integration (Formerly Extensions)

Modern text ads are no longer just text. They now include image assets, business logos, and brand names. These visual cues break the monotony of the text and provide the eye-catching appeal previously reserved for Shopping ads. When a text ad for a travel agency includes a high-quality image of a destination and a "Sponsored" favicon, it gains the visual authority that used to be the hallmark of organic results.

The "Organic-Adjacent" Design

Google has subtly updated the styling of text ads to blend more seamlessly with organic results. The transition from colorful backgrounds to a
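To make the click-share definition from "The Data Behind the Surge" concrete, here is a minimal sketch. The click counts are hypothetical illustrations, not figures from the study.

```python
def click_share(format_clicks: int, total_clicks: int) -> float:
    """Click share: one format's clicks as a percentage of all SERP clicks."""
    return 100 * format_clicks / total_clicks

# Hypothetical vertical: text ads took 320 of 1,000 tracked clicks last year
# and 440 of 1,000 tracked clicks this year.
last_year = click_share(320, 1_000)   # 32.0
this_year = click_share(440, 1_000)   # 44.0
print(f"YoY change: {this_year - last_year:+.1f} points")  # +12.0 points
```

A shift of that size, as the article notes, means clicks are being redistributed away from organic listings or other ad formats rather than appearing out of nowhere.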

The Science Of How AI Pays Attention

The Evolution of Search and the AI Attention Paradigm

The landscape of search engine optimization is undergoing its most significant transformation since the introduction of the Google algorithm itself. For decades, SEO professionals and content creators have focused on satisfying the requirements of traditional search engines: keyword density, backlink profiles, and user signals. However, with the rise of Large Language Models (LLMs) like ChatGPT, Claude, and Gemini, a new frontier has emerged. This frontier is governed not just by "relevance" in the traditional sense, but by the science of how AI pays attention.

Recent research into 1.2 million ChatGPT citations has unveiled a startling reality. The content structures that dominated the last decade, specifically the "ultimate guide" format and long-winded, narrative-driven articles, are being bypassed by generative engines. Instead, AI models show a distinct preference for front-loaded, entity-rich, and definitive writing. To remain visible in an era of Generative Engine Optimization (GEO), publishers must understand the architectural biases of the machines that are now filtering the world's information.

Understanding the Transformer Architecture and Self-Attention

To understand how AI pays attention, we must first look at the underlying technology: the Transformer model. Unlike older neural networks that processed data sequentially (one word at a time), Transformers use a mechanism called "self-attention." This allows the model to look at every word in a sentence or paragraph simultaneously and determine which words are most relevant to the meaning of the others.

When an AI model "reads" your content to decide if it should be cited in a response, it isn't reading like a human. It is assigning numerical weights to tokens (segments of words). These weights determine the "attention" the model pays to specific parts of your text. The 1.2 million citation study indicates that these weights are not distributed evenly. There is a mathematical bias toward specific structural elements that make it easier for the model to extract and synthesize information.

The Failure of the "Ultimate Guide" in the AI Era

For years, the "Ultimate Guide to [Topic]" was the gold standard of SEO. These 5,000-word behemoths were designed to cover every possible sub-topic, capturing long-tail keywords and keeping users on the page. In the eyes of a traditional search engine, this signaled authority and comprehensiveness.

In the eyes of an AI, however, these guides are often seen as inefficient. LLMs operate within a "context window," which is the amount of text they can process at one time. When a model searches the web to provide a real-time answer, it is looking for the "path of least resistance" to a factual statement. Long introductions, anecdotal fluff, and buried conclusions force the model to expend more computational resources to find the core answer. As a result, AI citations are increasingly favoring leaner, more direct sources that get to the point immediately.

Front-Loading: The Prime Real Estate of Information

One of the most significant findings in the analysis of ChatGPT citations is the power of front-loading. Front-loading is the practice of placing the most important information, the primary definition, or the direct answer to a query at the very beginning of a document or section. This preference exists because of "positional bias" in LLMs.
Research has shown that models are more likely to remember and prioritize information found at the beginning or the end of a prompt or a retrieved document, a phenomenon often called the "Lost in the Middle" effect. When an AI agent crawls a page to fulfill a user request, it prioritizes the first few hundred tokens. If your article spends the first three paragraphs "setting the stage" with vague generalities, the AI may determine that your source is less relevant than a competitor who provides a definitive statement in the first sentence.

The Role of Entity-Rich Writing

In the world of AI, entities are the building blocks of knowledge. An entity is a well-defined person, place, thing, or concept. Google's Knowledge Graph started this shift, but LLMs have taken it to the next level: they understand the world as a network of relationships between entities. The study of 1.2 million citations suggests that AI models are significantly more likely to cite content that is "entity-rich." This doesn't mean keyword stuffing. It means using precise terminology and clearly defining the relationships between concepts.

For example, instead of saying "The software helps you manage your work better," an entity-rich sentence would be "The [Product Name] project management platform integrates with [Entity B] and [Entity C] to automate [Process X]." By using specific nouns and clear attributes, you provide the AI with "hooks" that it can easily map to its internal knowledge base. This reduces the cognitive load for the model, making your content a more attractive source for a citation.

Definitive vs. Ambiguous Content

Traditionally, writers are taught to use nuance. We use phrases like "it depends," "some might argue," or "it is generally considered." While this is often more accurate in a human sense, AI models currently have a bias toward definitive writing. When ChatGPT or a similar engine looks for a source to answer a prompt like "What is the best temperature for brewing espresso?", it looks for a source that states, "The optimal temperature for brewing espresso is 195°F to 205°F." A source that enters a long philosophical debate about the subjectivity of taste without providing a hard number is less likely to be cited as the primary source of truth.

To optimize for AI attention, writers must balance nuance with clarity. Use the "inverted pyramid" style of journalism: give the definitive answer first, and then provide the supporting context and nuance later. This satisfies the AI's need for a quick extraction while maintaining the human reader's need for depth.

The "Lost in the Middle" Phenomenon in Content Strategy

The "Lost in the Middle" phenomenon is a critical concept for anyone involved in digital publishing. Studies have shown that when LLMs are given long documents to analyze, their ability to accurately retrieve information from the middle of those documents drops significantly.
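To ground the self-attention mechanism described earlier, here is a minimal numeric sketch of scaled dot-product attention, the core Transformer operation. It is illustrative only: toy vectors stand in for real token embeddings, and production models add multiple attention heads, learned projections, and positional encodings on top of this.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each output row is a weighted mix of
    the value vectors, weighted by how relevant each token is to the query."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # each row sums to 1.0
    return weights @ V, weights

# Three toy "tokens" with 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(X, X, X)
print(weights.round(2))  # row i: how much token i attends to every token
```

The weights matrix is the point for writers: every token's representation is rebuilt from the tokens the model judges most relevant, which is why prominent, unambiguous statements can earn a disproportionate share of the model's attention.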

The latest jobs in search marketing

The landscape of search engine marketing is undergoing a seismic shift. As we navigate through 2026, the traditional boundaries of SEO and PPC are being redrawn by generative AI, answer engine optimization (AEO), and a renewed focus on user intent. For professionals in this space, the job market is more dynamic than ever, offering a blend of high-level strategic roles and specialized technical positions. Whether you are an experienced strategist looking to lead an enterprise team or a specialist focused on the technical nuances of crawlability and backlink profiles, the current hiring cycle reflects a healthy demand for talent. Below is a comprehensive look at the latest job openings in search marketing, ranging from boutique agencies to Fortune 300 companies.

Newest SEO and Organic Growth Jobs

Search Engine Optimization has evolved beyond simple keyword targeting. Today's roles require a deep understanding of how LLMs (Large Language Models) process information and how brands can maintain visibility in an "AI-first" search environment. Here are the most recent SEO positions currently available.

SEO Specialist – Full-time Contract (Torque)
Date Posted: February 20, 2026

Torque is currently seeking an SEO Specialist to join an internal team operating as an SEO helpdesk. This role is unique in its structure, supporting a massive international client within the hospitality sector that manages over 120 global locations. You will be responsible for triaging and processing a high volume of SEO-related requests, ensuring that each location adheres to best practices while maintaining the brand's overarching search authority.

Senior SEO Analyst (Mason Interactive)
Date Posted: February 20, 2026 | Salary: Up to $80,000

At Mason Interactive, the Senior SEO Analyst role is designed for a problem-solver who can bridge the gap between technical data and creative strategy. You will lead optimization efforts for a diverse portfolio of clients, leveraging deep analytical insights to improve search engine rankings and drive meaningful traffic growth.

Growth/Digital Performance Marketing Lead (Thought Industries Inc.)
Date Posted: February 19, 2026

Thought Industries is looking for a Growth and Performance Lead with a heavy emphasis on SEO, AEO, and Conversion Rate Optimization (CRO). Based in Boston, this role is critical for an enterprise that helps other organizations maximize customer lifetime value. This position is ideal for someone who views SEO through the lens of the entire customer lifecycle, integrating UX and search visibility to drive ROI.

Marketing Strategist (Mutual of Omaha Mortgage)
Date Posted: February 19, 2026 | Location: Hybrid – Irvine, CA

Mutual of Omaha, a Fortune 300 name, is hiring a Marketing Strategist focusing on SEO and Online Reputation Management (ORM). In the mortgage industry, trust and visibility are paramount. This role involves using SEO and marketing automation to protect the brand's digital footprint and ensure that potential homeowners can find and trust their services easily.

Technical SEO Engineer (Nectiv)
Date Posted: February 19, 2026

Nectiv is a specialized organic growth consultancy that prioritizes systemic solutions over one-off fixes. They are searching for a Technical SEO Engineer to solve complex challenges involving information architecture, crawl optimization, and AI answer engine optimization (AEO/GEO). This is a highly technical role suited for someone who enjoys the "under the hood" aspects of search engines and SaaS platform growth.
Off-Page SEO Specialist (ASG)
Date Posted: February 19, 2026 | Salary: $15/hour base minimum | Location: Fully Remote

For those specializing in authority building, ASG is hiring an Off-Page SEO Specialist. With over five years of experience required, the specialist will manage off-page operations for various service-based brands in the HVAC, plumbing, and electrical industries. This role focuses on scaling outreach and managing high-quality backlink acquisitions.

Head of SEO/AEO (Tilt)
Date Posted: February 18, 2026

Tilt (formerly Empower) is a fintech disruptor that uses machine learning to redefine credit accessibility. They are looking for a Head of SEO/AEO to lead their organic discovery strategy. As search evolves toward AI-driven discovery, this leader will be responsible for ensuring Tilt's financial products are visible not just on Google, but across all emerging AI search surfaces.

Senior SEO Strategist (Nectiv)
Date Posted: February 18, 2026

In addition to their technical needs, Nectiv is hiring a Senior SEO Strategist. This role focuses on performance analysis and content strategy for SaaS and enterprise brands. If you have a knack for translating search data into actionable growth plans for high-growth marketplaces, this is a prime opportunity.

Senior Content Marketing Manager (Southwest Airlines / Wpromote)
Date Posted: February 18, 2026 | Location: Onsite – Dallas, TX

Wpromote is seeking a Senior Content Marketing Manager to work exclusively with Southwest Airlines. This embedded role involves redefining how travel inspiration is discovered organically. You will lead the content strategy from within the Southwest offices, blending brand storytelling with high-level search discoverability.

E-Commerce & Marketing Assistant (Zarifa USA)
Date Posted: February 17, 2026

Zarifa USA is looking for a versatile E-Commerce and Marketing Assistant. This is an excellent role for someone with strong Photoshop skills and a curiosity for SEO, email marketing, and website management. It offers structured training for someone looking to build a career in digital commerce.

Latest PPC and Paid Media Opportunities

The world of paid search and performance marketing is moving toward "centralized hub" models where data-driven strategies dictate every dollar spent. Agencies and brands are currently seeking experts who can navigate Meta, Google Ads, and emerging social platforms with precision.

Social Media Coordinator (DEL Records, Inc.)
Date Posted: February 20, 2026

In the music and entertainment world, digital storytelling is everything. DEL Records is hiring a Social Media Coordinator to manage paid social campaigns and influencer partnerships. This role is perfect for a marketer who is passionate about music and knows how to connect fans with artists through creative social strategies.

Director of Performance Marketing (AirSculpt)
Date Posted: February 19, 2026

AirSculpt, a leader in body contouring treatments, is looking for a Director of Performance Marketing. With over 75,000 cases treated, the company needs a high-level strategist to oversee their customer acquisition funnels. This role involves managing significant budgets and optimizing the journey from

Merchant Center flags feeds disruption

Understanding the Google Merchant Center Feed Disruption

For e-commerce businesses and digital marketers, Google Merchant Center (GMC) serves as the indispensable bridge between a product warehouse and the global consumer. It is the engine that drives Google Shopping Ads, free product listings across the Search and Shopping tabs, and dynamic remarketing campaigns. When this engine experiences a mechanical failure, the ripple effects are felt across the entire retail ecosystem. Recently, Google officially acknowledged a service disruption affecting Merchant Center Feeds, a situation that has put advertisers and store owners on high alert.

The disruption, which reportedly began on February 4, 2026, at 14:00 UTC, has been characterized by Google as an ongoing investigation. According to the official Merchant Center Status Dashboard, the most recent update on February 20, 2026, at 14:43 UTC, confirms that the tech giant is still investigating reports of issues with Feeds. While "service disruption" is a broad term, its implications for automated retail marketing are specific and potentially severe.

What Are Merchant Center Feeds and Why Are They Failing?

To understand the gravity of a feed disruption, one must understand what a feed actually does. In the simplest terms, a product feed is a file, usually in XML, TSV, or Google Sheets format, that contains a comprehensive list of the products you sell. This file includes critical attributes like product titles, descriptions, prices, availability, and high-resolution image URLs. Google uses this data to populate its Shopping results.

There are several ways feeds are processed: through scheduled fetches where Google "grabs" the file from a server, via manual uploads, or through the Content API for Shopping, which allows for real-time updates. When a disruption is flagged in the Feed system, it suggests that the mechanism Google uses to ingest, process, or validate this data is malfunctioning. This could be due to internal server latency, database synchronization errors, or a bug in the automated validation scripts that check if a product meets Google's strict policy guidelines.

The Timeline of the Incident

The duration of this particular incident is noteworthy. Beginning on February 4 and extending through late February, the timeframe suggests a complex issue rather than a localized glitch. For over two weeks, users have reported inconsistencies in how their product data is being handled. Google's communication through the status dashboard remains cautious, stating simply, "We will provide more information shortly." This lack of specific detail often indicates that the root cause is being meticulously isolated to prevent further data corruption across the massive GMC infrastructure.

The Direct Impact on Shopping Ads and Free Listings

The most immediate concern for any retailer is the performance of their paid campaigns. Google Shopping Ads are entirely dependent on feed data. If the feed processing system is disrupted, several critical failures can occur:

1. Stale Product Data

If a retailer changes a product price or updates stock levels in their back-end system, but the Merchant Center feed is unable to process that update, the live ads will show "stale" data. This can lead to a poor user experience, where a customer clicks an ad for a $50 item only to find it costs $70 on the site, and can even result in automatic account warnings for price mismatches.
2. Delayed Product Approvals

Whenever a new product is added to a feed, it must undergo a review process to ensure it complies with Google's advertising policies. A disruption in the feed system often brings these approvals to a grinding halt. For businesses launching new seasonal collections or time-sensitive promotions, these delays can result in significant lost revenue as products sit "pending" instead of being served to potential buyers.

3. Unexpected Product Disapprovals

Systemic disruptions can sometimes trigger "false positive" disapprovals. The automated systems that scan feeds might misinterpret data or fail to verify certain attributes, leading to products being removed from the auction entirely. When a high-volume SKU is suddenly disapproved during a disruption, it can cause a sharp drop in overall account traffic and conversions.

4. Loss of Visibility in Free Listings

Beyond paid ads, Merchant Center feeds power the free listings found in the Shopping tab. These organic placements are a vital source of "free" traffic for many small to mid-sized retailers. A feed disruption means these listings may not refresh or, in worse cases, might disappear from the search results altogether until the system stabilizes.

Navigating the Merchant Center Status Dashboard

Google provides the Merchant Center Status Dashboard as the primary source of truth for these incidents. Unlike general Google Search outages, Merchant Center issues are often more granular. The dashboard tracks various components, including the Merchant Center UI, the Content API, and the Feeds processing system.

The status of "Service Disruption" is a middle-ground classification. It indicates that the service is not entirely "down" (which would be a Service Outage), but it is not operating at its full capacity or intended reliability. For advertisers, this status is a signal to exercise caution. Large-scale changes to feed architecture or massive inventory uploads should ideally be postponed until the dashboard returns to a "Service Healthy" state (indicated by a green checkmark).

Recommended Actions for Retailers and Advertisers

While the disruption is in Google's hands to fix, there are several steps e-commerce managers can take to mitigate the damage to their accounts.

Monitor the Diagnostics Tab

The Diagnostics tab within the Merchant Center interface is your best friend during a disruption. It provides a real-time breakdown of item-level issues. Look for a spike in "Processing" statuses or a sudden increase in "Disapproved" items. If you see a large number of items being flagged for issues that didn't exist yesterday, it is likely a result of the ongoing system disruption rather than an error on your end.

Check Your Primary Feed Fetch Schedule

If you use a scheduled fetch, check the "Processing" history of your feed. If the last successful fetch was hours or days ago, you might try a manual upload to see if it bypasses the bottleneck. However, be aware that during a
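To ground the feed-file description from earlier in this article, here is a minimal sketch that writes a tab-separated product feed containing the core attributes Google documents for Shopping listings (id, title, description, link, image_link, price, availability). The product data is hypothetical and purely illustrative.

```python
import csv

# One hypothetical product with the core Shopping feed attributes.
products = [
    {
        "id": "SKU-001",
        "title": "Stainless Steel Water Bottle 750ml",
        "description": "Vacuum-insulated bottle that keeps drinks cold for 24h.",
        "link": "https://example.com/products/sku-001",
        "image_link": "https://example.com/images/sku-001.jpg",
        "price": "24.99 USD",
        "availability": "in_stock",
    },
]

# Write a TSV feed of the kind Merchant Center ingests via scheduled fetch
# or manual upload.
with open("feed.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=products[0].keys(), delimiter="\t")
    writer.writeheader()
    writer.writerows(products)
```

During an incident like this one, keeping a known-good copy of the last successfully processed feed file makes it much easier to tell platform-side failures apart from errors in your own data.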

What’s next for PPC: AI, visual creative and new ad surfaces

The Evolution of Paid Search: Navigating the AI Frontier

The landscape of Pay-Per-Click (PPC) advertising is currently undergoing its most significant transformation since the inception of the search engine. For decades, the industry was defined by the "keyword-to-click" model: a linear process where advertisers bid on specific terms to capture immediate intent. Today, that model is being dismantled and rebuilt by artificial intelligence, visual-first discovery, and a variety of new digital surfaces that extend far beyond the traditional search results page.

In a recent deep-dive discussion at SMX Next, industry heavyweights Ginny Marvin, Google's Ads Product Liaison, and Navah Hopkins, Microsoft's Product Liaison, shared their visions for the future of the medium. Their insights suggest that the PPC professionals who thrive in the coming years will be those who transition from being tactical "button-pushers" to strategic "orchestrators" of AI systems and creative assets.

Emerging Ad Formats and the Rise of AI Surfaces

One of the most profound shifts in PPC is the diversification of where ads actually appear. We are moving away from a world where "search" only happens in a browser box. Instead, we are entering an era of conversational interfaces and immersive environments.

Microsoft's innovation in this space is particularly focused on AI-first formats. Navah Hopkins highlighted the emergence of "showroom ads," a format designed specifically for the era of generative AI. Unlike a standard text ad, showroom ads allow users to engage and interact with a virtual space where the advertiser provides the core content and assets, while Microsoft's Copilot ensures brand security and guides the conversation. This creates a high-intent, interactive experience that feels more like a concierge service than a traditional advertisement.

Furthermore, the gaming industry is finally being recognized as a powerhouse ad channel. Historically, gaming ads have been viewed with skepticism by both marketers and players. Hopkins, an avid gamer herself, noted that while many users "justifiably hate the ads that serve on gaming surfaces," often due to poor placement or irrelevance, the next generation of gaming ads will be different. By leveraging AI to ensure contextual relevance and immersive integration, these ads will become part of the experience rather than a disruption to it.

Ginny Marvin echoed this sentiment, noting that the rise of conversational AI and visual discovery tools is fundamentally redefining what we call "intent." In the past, intent was easy to categorize based on a keyword. Now, intent is dynamic. A user might start a journey with a vague visual search, move to a conversational AI query to narrow down their options, and finally convert on a completely different platform. PPC marketers must now prepare for a landscape where traditional search is merely one of many touchpoints in a complex, AI-mediated journey.

The Visual Revolution: Creative as a Performance Driver

For a long time, PPC was a text-heavy discipline. High-performing copy and meticulous keyword matching were the primary levers of success. However, as platforms like Google and Microsoft lean into visual-heavy formats, the role of creative has shifted from the periphery to the center of the strategy. Navah Hopkins simplified the logic behind this shift: "Most people are visual learners." In an information-dense digital world, an image or video can convey brand value and product utility far faster than a paragraph of text.
Hopkins challenged performance marketers to abandon the old-school assumption that visuals are only for "top-of-funnel" brand awareness or remarketing. Today, visual content belongs at every single stage of the funnel, including the final conversion point.

Ginny Marvin expanded on this, explaining that leading with brand-forward visuals is no longer optional. In the current ecosystem, your creative assets are responsible for telling your story, driving discovery, and prompting action simultaneously. This is because AI-driven ad platforms, such as Google's Demand Gen or Microsoft's Audience Network, rely on a robust library of images and videos to "assemble" the right message for the right user at the right moment.

The modern PPC strategy requires marketers to understand their brand's unique positioning and reflect it consistently across a massive library of assets. If the AI doesn't have the right visual "ingredients" to work with, the campaign's performance will inevitably plateau, regardless of how high the budget is.

Dispelling the Myths of AI in Creative Production

As AI tools for image and text generation become more accessible, a dangerous misconception has taken root: that AI can, or should, replace the human creative process entirely. Both Marvin and Hopkins were quick to debunk this myth.

Hopkins issued a stern caution against over-relying on AI to build entire creative libraries. "AI is not the replacement for our creativity," she emphasized. "You should not be delegating full stop your creative to AI." The risk of total delegation is a loss of brand identity and a descent into "generic" advertising that fails to resonate with humans. Instead, AI should be viewed as an amplifier: a tool that takes a human-defined concept and scales it across formats.

A critical test for any modern PPC asset is its ability to stand alone. In automated campaign types, the AI might choose to show only a single headline, or just one image without a caption. If that individual asset doesn't clearly communicate the brand's value proposition, the impression is wasted.

Marvin reinforced the need for volume and variety. "You probably need more assets than you currently have," she noted. In the era of cross-channel campaigns, the AI needs to test hundreds of combinations to find the winning formula. If an advertiser only provides three images and two headlines, they are essentially tying the AI's hands behind its back. The goal isn't just to have "AI-generated" content, but to have a human-curated library of high-quality assets that the AI can then optimize.

The Strategic Management of Ad Assets

With the focus shifting toward asset-heavy campaigns, the way marketers manage their accounts must change. It is no longer about managing "keywords"; it is about managing an "asset portfolio." Marvin explained that modern AI systems are now capable of evaluating the performance of individual assets

How to vibe-code an SEO tool without losing control of your LLM

The landscape of software development and digital marketing has undergone a seismic shift with the advent of Large Language Models (LLMs). We use these tools daily, often heavily, to bridge the gap between human intent and machine execution. In fact, research indicates that tech professionals utilize LLMs at twice the rate of the general population, with many spending more than a full day each week interacting with AI interfaces. However, even the most seasoned tech workers face a recurring frustration: the "drift" that happens when an LLM stops following instructions and starts hallucinating or losing the thread of the project.

This is the central challenge of "vibe-coding." Vibe-coding is the process of building software by describing what you want in natural language, letting the AI generate the code, and then iteratively refining the output until it matches your "vibe," or your specific functional intent. While it sounds simple, building a complex SEO tool without losing control of your LLM requires more than just good prompts; it requires a structured environment, an understanding of context windows, and a rigorous troubleshooting methodology.

Choose your vibe-coding environment

The era of copy-pasting snippets from a ChatGPT browser window into a text editor is effectively over for anyone serious about building tools. To vibe-code effectively, you need an integrated environment where the LLM has direct access to your file structure. This allows the AI to "see" the entire project, understand how different files interact, and suggest changes that don't break existing functionality.

The current gold standard for this workflow is Cursor. Based on VS Code, Cursor allows you to use models like Claude 4.6 Opus or Gemini 3 Pro directly within your coding environment. For many, the journey starts on a free hobby plan, but as you realize the efficiency of having an AI partner that understands your codebase, moving to a Pro tier becomes almost inevitable. However, Cursor is not the only player in the game. Here are the primary environments currently used by professional vibe-coders:

Cursor: The most popular choice for a reason. Its interface is intuitive, and it offers features like "Composer," which can write code across multiple files simultaneously. It is highly customizable and integrates seamlessly with your existing terminal and Git workflows.

Windsurf: The main competitor to Cursor, Windsurf distinguishes itself with its "Flow" feature. It is designed to be more autonomous, capable of running terminal commands to test the code it just wrote and self-correcting based on the error messages it receives. This reduces the "hand-holding" required by the user.

Google Antigravity: A newer entrant that moves away from the traditional file-tree view. Antigravity focuses on a "fleet of agents" approach, where you direct multiple autonomous agents to build, test, and deploy features. It is built for scale and focuses on high-level direction rather than line-by-line editing.

Why prompting alone isn't enough

Many SEOs approach LLMs with the assumption that a "perfect prompt" will result in a perfect tool. This is a misconception. While prompting is important, the real bottleneck in vibe-coding is the "context window": the amount of information the model can hold in its active memory at any given time. While modern models like Gemini 3 Pro boast windows of up to 1 million tokens, the quality of retrieval degrades as that window fills up.
LLMs utilize attention mechanisms that naturally favor the beginning and end of the provided text. This is known as the "lost in the middle" phenomenon. If you stuff your context window with 50,000 lines of code, the model may forget a crucial instruction you gave it ten minutes ago. To prevent this, you must adopt a "one team, one dream" philosophy: break your project into logical stages and clear the LLM's memory between them. This ensures the model is always focused on the specific task at hand without being distracted by irrelevant background noise.

Furthermore, you must maintain a "trust but verify" mindset. Even when vibe-coding, you should understand the directional options for your project. If the AI suggests a complex scraping library when a simple API call would suffice, you need the foundational knowledge to steer it back on track. Troubleshooting should always involve asking the model to explain its logic before it executes a fix.

Tutorial: Let's vibe-code an AI Overview question extraction system

To demonstrate the power of structured vibe-coding, we will walk through the creation of an SEO tool designed to extract the implied questions answered by Google's AI Overviews (formerly SGE). In the modern SEO environment, ranking often depends on answering the specific questions Google deems relevant enough to highlight in its generative summaries. By building a tool that extracts these questions, you can create content that is perfectly aligned with Google's current understanding of a topic.

Step 1: Planning and Brainstorming

Before touching a code editor, you must define the logic of your system. It is often helpful to use a standard LLM interface like Gemini or ChatGPT to map out the architecture. Start with a high-level description of your goal and the necessary steps. For our AI Overview extractor, the plan looks like this:

1. Select a target search query.
2. Conduct a search and extract the AI Overview content using a reliable API.
3. Pass that content to an LLM to identify the implied questions.
4. Save the questions and the source snippets to a permanent log for analysis.

During this phase, ask your LLM to be critical. Ask it to suggest the simplest path and warn you about potential pitfalls like bot detection. For instance, rather than trying to build a custom scraper that Google will quickly block, use a service like SerpAPI. It handles the proxy management and DOM parsing for you, providing a clean JSON output of the AI Overview.

Step 2: Setting the Groundwork in Cursor

Once you have a plan, open Cursor and set up your project. One of Cursor's strengths is the ability to toggle between different models. For the initial setup, a reasoning model like Claude 4.6 Opus
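Here is a minimal sketch of the Step 1 pipeline in Python. It assumes the serpapi and openai packages, that SerpAPI exposes the AI Overview under an ai_overview key in its response for the query, and a hypothetical questions.jsonl log file. Treat it as a starting point to refine inside your editor, not a finished tool.

```python
import json
from serpapi import GoogleSearch   # pip install google-search-results
from openai import OpenAI          # pip install openai

def extract_ai_overview_questions(query: str, serpapi_key: str) -> list[str]:
    # Plan step 2: run the search and pull the AI Overview block.
    results = GoogleSearch({"q": query, "api_key": serpapi_key}).get_dict()
    overview = results.get("ai_overview", {})   # assumed response key
    context = json.dumps(overview)              # raw blocks as LLM context

    # Plan step 3: ask an LLM which implied questions the overview answers.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "List the implied questions this "
             "AI Overview answers, one per line. Output only the questions."},
            {"role": "user", "content": context},
        ],
    )
    questions = response.choices[0].message.content.splitlines()

    # Plan step 4: append to a permanent log for later analysis.
    with open("questions.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps({"query": query, "questions": questions}) + "\n")
    return questions
```

From here the vibe-coding loop is iterative: paste the sketch into your environment, run it against a real query, and let the model refine the parsing once you can see the actual shape of the AI Overview payload.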

Emina Demiri talks surviving firing your biggest client

The dream for many digital agencies is to land that one "whale" client: the massive brand that brings in enough revenue to scale the team, move into a fancy office, and establish market authority. But what happens when that whale becomes an anchor dragging the entire ship down?

This was the central theme of a recent discussion on episode 352 of PPC Live The Podcast, where Emina Demiri Watson, Head of Digital at Brighton-based Vixen Digital, shared a raw and honest account of one of the most difficult moves a business can make: firing a client that represented 70% of their total revenue. In the world of agency growth and digital marketing, the conversation is usually focused on acquisition, scaling, and retention. Rarely do we talk about the "surgical removal" of revenue for the sake of survival. Emina's story isn't just a cautionary tale about financial planning; it is a masterclass in leadership, cultural preservation, and the technical realities of managing high-stakes PPC accounts in 2026.

The Slow Decay: When a Strategic Partnership Turns Toxic

Agency-client relationships rarely implode overnight. Usually, they erode. For Vixen Digital, the relationship with their largest client had been deteriorating for approximately three months before the final decision was made. According to Emina, it wasn't a case of the client being difficult from the start. Instead, the dynamic shifted gradually, souring until it became unrecognizable from the partnership that began years prior.

This "slow burn" is often more dangerous than an explosive conflict. When a relationship is toxic from day one, it's easy to walk away. But when a long-term partner begins to change, perhaps due to internal pressure, new management, or shifting priorities, agencies often fall into the trap of the sunk cost fallacy. They hope that if they just work a little harder or deliver a slightly better ROAS, the relationship will revert to its former glory.

At Vixen Digital, the breaking point came when the leadership team realized that the toxic atmosphere wasn't just staying in the boardroom; it was infecting the team. When your staff dreads opening Slack or answering a specific client's email, the cost of that client exceeds the value of their retainer. Leadership decided that their internal culture and team well-being were more important than a massive, yet corrosive, line item in the budget.

The Hidden Danger of Customer Concentration

Once the decision was made to part ways, the Vixen Digital team had to face a harsh reality. In the hustle of day-to-day operations and campaign management, it is easy to lose sight of the broader financial structure of the agency. When they sat down to run the numbers, they discovered a glaring vulnerability: customer concentration.

Customer concentration is a financial metric that measures how much of your revenue comes from a single source. In a healthy agency model, you want a diversified portfolio where no single client can sink the company if they leave. Vixen Digital discovered that this one client accounted for roughly 70% of their revenue. This realization often comes as a shock because agencies tend to focus on "total revenue" rather than "revenue distribution." When you are busy hitting targets and managing complex PPC campaigns, sophisticated financial audits often take a backseat to campaign performance. It took a simple Excel formula to reveal that the agency had accidentally built its entire foundation on a single pillar.
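That "simple Excel formula" translates directly into a few lines of Python. This sketch uses hypothetical retainer figures to compute each client's share of total revenue and flag anything past the risk threshold discussed below.

```python
# Hypothetical monthly retainers for an agency's client roster.
revenue = {"Client A": 70_000, "Client B": 12_000,
           "Client C": 10_000, "Client D": 8_000}

total = sum(revenue.values())
for client, amount in sorted(revenue.items(), key=lambda kv: -kv[1]):
    share = 100 * amount / total
    flag = "  <-- concentration risk" if share > 25 else ""
    print(f"{client}: {share:.0f}% of revenue{flag}")
# Client A: 70% of revenue  <-- concentration risk
```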
The lesson here for any agency owner or freelancer is clear: audit your revenue distribution quarterly. If one client makes up more than 20-25% of your business, you are in a high-risk zone.

Warning Signs Beyond the Dashboard

Emina's experience highlights a critical skill for digital marketers: the ability to read signals that exist outside of Google Ads or Meta Business Suite. Too often, PPC managers focus solely on campaign performance: CPCs, conversion rates, and ROAS. However, Emina suggests that the real warning signs of a failing partnership are often found in the client's internal operations. Agencies should be on the lookout for several key indicators that a relationship is at risk:

1. Corporate Restructuring and Team Changes

When a client undergoes a massive internal reorganization, the agency is often the first "legacy" vendor to be questioned. New marketing directors or CEOs often want to bring in their own preferred partners or move operations in-house to prove their value. If your point of contact changes frequently, the institutional knowledge of your value is being lost.

2. Operational Failures Affecting Marketing

Marketing cannot exist in a vacuum. Emina noted that factors like security breaches or technical glitches on the client's side can prevent leads from converting downstream. If an agency is driving high-quality traffic but the client's infrastructure is failing to capture it, the agency is often unfairly blamed for a "drop in performance."

3. Communication Breakdown

A shift in tone, a decrease in transparency, or a sudden lack of feedback on strategy are all precursors to a "firing." When a client stops treating the agency as a strategic partner and begins treating them as a transactional vendor, the clock is ticking.

The Long Road to Recovery: How to Claw Back Revenue

Firing 70% of your revenue is a terrifying prospect, but Vixen Digital's recovery strategy provides a roadmap for others facing similar situations. According to Emina, the recovery was built on three foundational pillars:

Rigorous Financial Tracking

Moving forward, the agency implemented better systems to track customer concentration. They ensured that no single account could ever hold that much power over the agency's survival again. This meant being more intentional about the types of clients they pursued and how they balanced their portfolio.

Returning to Core Values

When you lose a major client, there is a temptation to panic and take on any new business that comes your way, regardless of fit. Emina argues that the better approach is to use your company values as a compass. By sticking to their values, Vixen Digital was able to attract clients that were a better cultural fit, ensuring long-term stability rather than a quick,

AI agents in SEO: A practical workflow walkthrough

The Evolution of Automation in Search Engine Optimization

For over a decade, automation has been the backbone of high-performance SEO operations. We began with simple scripts to check for 404 errors and moved into sophisticated reporting dashboards that pulled data from Google Search Console and Ahrefs. However, we are currently witnessing a paradigm shift: we are moving away from simple "if-this-then-that" automation and into the era of agentic workflows.

AI agents represent a significant leap forward. Unlike traditional automation tools that follow a rigid, linear path, AI agents leverage Large Language Models (LLMs) to interpret data, make decisions, and execute multi-step processes across different software environments. In the world of SEO, this means moving beyond just "pulling data" to "interpreting data and taking action." Platforms like n8n have emerged as the leading choice for SEO professionals who want to build these intelligent systems without needing a PhD in computer science.

Understanding n8n: The Orchestrator for Modern SEO

To understand why AI agents are transformative, we must look at the platform often used to build them. n8n is an extendable workflow automation tool that serves as a bridge between your various SEO tools, databases, and AI models. Think of it as a highly sophisticated version of Zapier, but with one critical difference: it is designed for technical users who need granular control over their data and logic.

In a standard automation, you might send a new lead from a form to a spreadsheet. In an agentic workflow powered by n8n, the system receives the form data, uses an LLM to research the lead's website, determines the most likely SEO pain points based on the site's meta tags, and then drafts a personalized outreach email that references specific technical flaws. This isn't just moving data; it is synthesizing intelligence.

Choosing Between Cloud-Hosted and Self-Hosted Deployment

One of the first decisions an SEO team must make when adopting n8n is how to deploy it. This choice significantly impacts the flexibility of your AI agents. There are two primary paths:

The Cloud-Hosted Route: This is the "Software as a Service" (SaaS) model. It is managed by the n8n team, meaning you don't have to worry about server maintenance, security patches, or infrastructure updates. It is the quickest way to get started, but it comes with limitations. The environment is "sandboxed," meaning you have restricted access to the underlying file system. You also cannot easily install community-created nodes, which are often the secret sauce for niche SEO tasks.

The Self-Hosted Route: For SEO agencies and enterprise teams, self-hosting on a platform like DigitalOcean, AWS, or a private server is often the preferred choice. While it requires more technical expertise to set up, it removes the "sandbox" restrictions. You can configure the server to interact with workflows in custom ways, such as saving large crawl files directly to a database, and use community nodes developed by other SEO engineers. Additionally, for large-scale operations, the cost of self-hosting is often significantly lower than paying for high-tier cloud credits.

Essential Ingredients: API Credentials and LLM Integration

An AI agent is only as smart as the model powering it. Regardless of which n8n package you select, the "brain" of your agent will require API credentials from providers like OpenAI (GPT-4o), Anthropic (Claude 3.5 Sonnet), or Google (Gemini).
These models are not free; they operate on a pay-per-token basis, so managing your workflow's efficiency is vital to keeping costs under control.

Within the n8n interface, building an agent is a visual process. You work on a canvas, adding "nodes" that represent different actions. A workflow might begin with a trigger node, such as a scheduled time or an incoming webhook, and then flow into an AI node where the logic happens. The output can then be funneled into communication tools like Slack, Microsoft Teams, or Gmail, or even pushed directly into a Content Management System (CMS) like WordPress.

A Practical Walkthrough: The SEO News Summarizer Agent

To see an AI agent in action, let's walk through a practical workflow designed to solve a common problem: staying updated with the rapid-fire changes in the search industry. Most SEOs follow dozens of blogs, but few have time to read every update. An AI agent can act as a digital research assistant, filtering the noise and delivering only the signal.

Step 1: The Trigger and Scraper Nodes

The workflow begins with an RSS Feed Read node. This node is configured to pull the latest headlines and content from major SEO news publishers. Instead of a human manually checking these sites, the agent does it every morning at a scheduled time. To make the system more interactive, you can also use a Webhook node. This allows a team member to trigger the process on demand. For example, by typing a specific command in a Microsoft Teams channel, an outgoing webhook notifies n8n to start the scraping process immediately.

Step 2: Structuring Data for the LLM

Raw RSS data is often messy. Before the AI can process it, the data needs to be structured. n8n passes data between nodes as JSON (JavaScript Object Notation). By using the "Schema" view in the interface, developers can debug the data flow, ensuring that the title, URL, and full text of the articles are correctly identified. This structured data is then passed into the AI Agent node.

Step 3: The Role of System and User Prompts

This is where the "intelligence" happens. The AI Agent node in n8n typically uses two types of prompts to guide the LLM:

The System Prompt: This defines the agent's persona and permanent rules. For an SEO news agent, the system prompt might say: "You are a senior SEO strategist with 15 years of experience. Your task is to analyze news articles and identify how these changes specifically impact organic search rankings, technical SEO, and content strategy. Use a professional tone and format your output in Markdown."

The User Prompt: This is dynamic. It pulls in the actual content scraped in the first step. It might look
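Outside of n8n's canvas, the same two-prompt pattern can be sketched in plain Python. This is an illustration of the pattern, not n8n's internals: it assumes the feedparser and openai packages, uses a hypothetical feed URL, and reuses the system prompt quoted above.

```python
import feedparser            # pip install feedparser
from openai import OpenAI    # pip install openai

SYSTEM_PROMPT = (
    "You are a senior SEO strategist with 15 years of experience. Your task "
    "is to analyze news articles and identify how these changes specifically "
    "impact organic search rankings, technical SEO, and content strategy. "
    "Use a professional tone and format your output in Markdown."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
feed = feedparser.parse("https://example.com/seo-news/feed")  # hypothetical URL

for entry in feed.entries[:5]:
    # The dynamic "user prompt": structured data pulled from the RSS step.
    user_prompt = f"Title: {entry.title}\nURL: {entry.link}\n\n{entry.summary}"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": user_prompt}],
    )
    print(response.choices[0].message.content)
```

The split mirrors what the n8n AI Agent node does for you: the system prompt stays fixed across runs, while the user prompt is rebuilt from whatever the trigger and scraper nodes deliver.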

Google now attributes app conversions to the install date

The Evolution of App Attribution: A Significant Shift for Google Ads

In the fast-paced world of mobile app marketing, data is the primary currency. For years, digital marketers and mobile growth experts have grappled with the complexities of attribution: the process of determining which marketing touchpoint led to a specific action, such as a download or a purchase. One of the most persistent challenges in this space has been the discrepancy between different measurement platforms. Google has recently taken a monumental step toward resolving these issues by updating how it attributes conversions within app campaigns.

Google is officially shifting its attribution methodology from the date of the ad click to the date of the actual app install. While this may sound like a minor technical adjustment, its implications for campaign optimization, budget allocation, and data reconciliation are profound. This move represents a modernization of Google's advertising infrastructure, bringing it into closer alignment with industry standards and providing advertisers with a clearer, more actionable picture of their campaign performance.

Understanding the Shift: Click Date vs. Install Date

To appreciate the impact of this change, it is essential to understand the "before and after" of Google's attribution logic. Historically, Google Ads operated on a model where a conversion was credited to the date the user interacted with the advertisement. For example, if a user clicked an ad for a mobile game on Monday but did not actually download and open the app until Thursday, Google would retroactively record that conversion as occurring on Monday. Under the new system, that same conversion is now attributed to Thursday, the day the app was actually installed and opened for the first time.

This shift changes the chronological flow of data in the Google Ads dashboard. Instead of looking backward to tie actions to historical clicks, Google is now focusing on the moment the value is actually realized: the install. This change addresses a fundamental "lag" in reporting. In the previous model, a marketer looking at their data for a specific day might see conversion numbers fluctuate for weeks as late-installing users were retroactively added to that day's totals. By switching to the install date, the data becomes more "fixed" and reflective of real-time user activity.

Bridging the Gap Between Google Ads and MMPs

One of the most significant pain points for mobile advertisers has been the constant discrepancy between Google Ads reporting and data from Mobile Measurement Partners (MMPs) like AppsFlyer, Adjust, Branch, and Kochava. MMPs are third-party tools used to verify app installs and track user behavior across multiple platforms. For years, it was common, and frustrating, to see Google Ads reporting one number while the MMP reported another.

A primary reason for this mismatch was the difference in attribution logic. Most MMPs have long used the install date as the primary anchor for their reporting. Because Google was using the click date, marketers were forced to perform complex manual reconciliations to understand their true Return on Ad Spend (ROAS). By adopting the install date as the standard, Google is effectively speaking the same language as the rest of the mobile ecosystem. This alignment reduces the "data fog" that often plagues high-spending app campaigns and allows marketing teams to trust their dashboards without needing a secondary spreadsheet to "translate" the numbers.
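The logic of the change can be expressed in a few lines. This sketch is illustrative, not Google's implementation: it reports the same conversion under both models, using the Monday-click, Thursday-install example from above and the default 30-day window discussed later in this article.

```python
from datetime import date, timedelta

ATTRIBUTION_WINDOW = timedelta(days=30)  # default window for app campaigns

def report_date(click: date, install: date, model: str) -> date | None:
    """Return the date a conversion is credited to, or None if the install
    falls outside the attribution window."""
    if install - click > ATTRIBUTION_WINDOW:
        return None                      # too late to attribute at all
    return click if model == "click_date" else install

click, install = date(2026, 2, 2), date(2026, 2, 5)   # Monday -> Thursday
print(report_date(click, install, "click_date"))      # 2026-02-02 (old model)
print(report_date(click, install, "install_date"))    # 2026-02-05 (new model)
```

Under the old model, Thursday's install would retroactively inflate Monday's totals; under the new one, the conversion lands on the day the value was realized.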
The Impact on Smart Bidding and Machine Learning

Beyond simple reporting, the most critical benefit of this update lies in how it fuels Google's machine learning algorithms. Google App Campaigns (formerly known as Universal App Campaigns or UAC) are heavily automated. They rely on "Smart Bidding," where Google's AI analyzes thousands of signals to determine how much to bid for a specific user to achieve a target Cost Per Acquisition (CPA) or ROAS.

Machine learning thrives on fresh, timely data. Under the old click-based model, there was often a significant "attribution lag." If a user clicked an ad but waited several days or even weeks to install the app, the signal that the ad was successful was delayed. This delay meant the algorithm was often "starved" of conversion signals, making it slower to learn which audiences or creative assets were actually working.

By tying conversions to the install date, Google's Smart Bidding receives signals much faster. The algorithm no longer has to wait for a 30-day window to close before it fully understands the value of a specific campaign segment. This leads to several performance improvements:

Faster Optimization: Campaigns can move out of the "Learning Phase" more quickly because conversion signals are being processed in a more linear, timely fashion.

More Stable Performance: With more consistent data entry, bidding algorithms are less likely to overreact to perceived "dry spells" that were actually just reporting lags.

Better Budget Allocation: Google's AI can more accurately shift budget toward the ads that are driving immediate installs, rather than waiting for retrospective data to populate.

The 30-Day Attribution Window Problem

The default attribution window for Google App Campaigns is typically 30 days. This means that if someone clicks an ad, Google will count it as a conversion as long as the install happens within that month-long period. While a long window is helpful for capturing the full journey of a cautious user, it created a "silent drag" on performance optimization under the old system.

When conversions are backdated to a click that happened 25 days ago, that data point is essentially "stale" for the purposes of real-time bidding. The algorithm is trying to optimize for what is happening now, but it is being fed information about a user intent that existed nearly a month ago. By shifting the credit to the install date, Google ensures that the conversion signal is relevant to the current market conditions and the current state of the campaign's optimization.

Many advertisers never adjust these default windows, meaning they were unknowingly operating under a system that delayed their own success. This update essentially "optimizes the optimizer" by ensuring the default behavior of the platform is more conducive to modern, fast-moving mobile markets.

What This Means for


How to use GA4 and Looker Studio for smarter PPC reporting

In the high-stakes world of performance marketing, data is far more than a simple end-of-month report card. It is the tactical roadmap that determines where every dollar of your budget should flow. To navigate this landscape successfully in 2026, marketers must move beyond the constraints of default tools. If you are still relying solely on the built-in reporting interfaces of Google Analytics 4 (GA4), you are likely struggling to tell a cohesive story to stakeholders and missing the granular insights needed to truly optimize your spend.

The solution lies in the synergy between GA4 and Looker Studio. While GA4 is a powerhouse for data collection and behavioral analysis, Looker Studio acts as the visualization layer that transforms raw numbers into decision-grade insights. By integrating the two platforms, you can build interactive dashboards that do more than display clicks; they drive real-world campaign improvements. This guide explores how to leverage GA4 and Looker Studio for smarter PPC reporting, covering everything from technical integrations to specific use cases like budget pacing and waste-reduction audits.

GA4 vs. Looker Studio: Understanding Their Unique Roles

Before diving into the technical setup, it is essential to understand the distinct roles these two platforms play in a modern tech stack. GA4 is your primary "source of truth" for user behavior. It tracks how people interact with your website or app, using a flexible, event-based model that records every click, scroll, and conversion. It also features a dedicated Advertising workspace that pulls in metrics from Google Ads. However, GA4 is fundamentally designed for data collection and deep-dive analysis, not for the polished, high-level reporting that clients and executives demand.

Looker Studio (formerly Data Studio) serves as your reporting headquarters. It connects to over 800 different data sources, allowing you to aggregate information from Google Ads, Microsoft Ads, Meta, TikTok, and even offline CRM data into a single, unified view. In the context of 2026, the functional differences between these tools have become even more pronounced.

Data Sources and Integration Capabilities

GA4 focuses heavily on on-site analytics. A significant update in late 2025 saw Google roll out native integration for Meta and TikTok Ads, allowing cost, clicks, and impressions to be imported directly into GA4 without third-party middleware. While this was a major step forward, the integration remains somewhat rigid: it requires meticulous UTM matching and cannot "clean" campaign naming conventions or distinguish between platform-specific conversion values, such as Facebook Leads versus GA4 Conversions.

Looker Studio excels where GA4 falls short. It provides the flexibility to blend these disparate data sources or connect to platforms that GA4 still does not support natively, such as LinkedIn Ads or specialized industry portals. In Looker Studio, you can map different campaign names to a single unified category, ensuring your cross-channel reporting stays clean and accurate.

Advanced Metrics and Calculated Fields

The reporting UI within GA4 has seen substantial improvements, now allowing up to 50 custom metrics per standard property, a significant jump from the previous limit of five. However, these metrics are often static: if you need a metric that isn't pre-defined, you are often out of luck within the GA4 interface.

Looker Studio introduces the power of calculated fields. This feature allows you to perform complex mathematical operations on your data in real time. For example, you can calculate true profit by subtracting ad spend and COGS (Cost of Goods Sold) from revenue, or create custom "Engagement Scores" by weighting different session metrics. These calculations happen at the report level, meaning you never have to alter your underlying source data in GA4.
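To make this concrete, here is a minimal Python sketch of the logic behind two such fields. In Looker Studio these would be written as formulas at the report level; the field names, COGS figure, and score weights below are illustrative assumptions rather than prescribed values.

```python
# Equivalent of a calculated field: Revenue - Ad Spend - COGS.
def true_profit(revenue: float, ad_spend: float, cogs: float) -> float:
    return revenue - ad_spend - cogs

# A custom "Engagement Score" built by weighting session metrics;
# the weights are arbitrary and would be tuned to the business.
def engagement_score(engaged_sessions: int, conversions: int,
                     avg_session_seconds: float) -> float:
    return (0.5 * engaged_sessions
            + 2.0 * conversions
            + 0.01 * avg_session_seconds)

print(true_profit(revenue=5000.0, ad_spend=1200.0, cogs=1800.0))   # 2000.0
print(engagement_score(engaged_sessions=340, conversions=12,
                       avg_session_seconds=95.0))                   # 194.95
```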
The Power of Data Blending

Data blending is perhaps the most compelling reason to use Looker Studio for PPC. It allows you to join tables from different sources based on a common key, such as a date or a campaign ID. For enterprise users, Looker Studio Pro now offers LookML models for robust data governance, but even the standard free version provides the flexibility needed to match top-of-funnel ad spend with bottom-of-funnel conversions from a CRM. This creates a full-funnel view that GA4 simply cannot replicate on its own.

Why Looker Studio is Essential for Modern PPC Teams

For a PPC team to be effective, it needs to see the "why" behind the numbers. Looker Studio facilitates this by moving beyond the limitations of flat spreadsheets and standard analytics views.

1. Creating a Unified, Cross-Channel View

Most modern marketing strategies involve a mix of intent-based search (Google Ads, Microsoft Ads) and awareness-based social media (Meta, TikTok). Checking these platforms individually leads to fragmented strategy and "siloed" thinking. A Looker Studio dashboard acts as a single source of truth, blending these channels into a comparative view. You might find, for instance, that while X Ads drives 18% of your traffic, Microsoft Ads accounts for 16% but converts at a 25% higher rate. Seeing these metrics side by side allows for more intelligent budget allocation.

2. Visualizing Creative Performance with the IMAGE Function

In visual-heavy industries like real estate, automotive, or e-commerce, the creative is often more important than the targeting. A standard report telling a client that "Ad_Group_B" has a high click-through rate (CTR) is abstract and unhelpful. By using the IMAGE function in Looker Studio, you can pull the actual ad image URL into your reporting tables. Stakeholders can then see the exact photo or video that is driving performance, making the data tangible and easier to act upon.

3. Analyzing Post-Click Behavior

A high CTR is a vanity metric if those users are bouncing immediately upon hitting the landing page. By bringing GA4's behavioral data into your PPC reports, you can connect the ad click to the subsequent on-site action. You might discover a specific campaign with a low cost-per-click (CPC) but a 100% bounce rate. Looker Studio lets you visualize "Engaged Sessions per Click," a far more accurate barometer of lead quality than raw traffic volume.

4. Custom Metrics for Specific Business Goals
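As a sketch of how such goal-specific metrics typically come together, the example below mimics a Looker Studio blend in Python: two hypothetical sources are joined on a shared (date, campaign) key, then "Engaged Sessions per Click" and a simple profit figure are derived from each blended row. Every name and number is invented for illustration.

```python
# Rows keyed by (date, campaign), the join key a blend would use.
ads = {  # from the ad platform: spend and clicks
    ("2025-11-01", "Brand Search"): {"cost": 120.0, "clicks": 300},
    ("2025-11-01", "Paid Social"):  {"cost": 200.0, "clicks": 850},
}
ga4 = {  # from GA4: engaged sessions and purchase revenue
    ("2025-11-01", "Brand Search"): {"engaged": 210, "revenue": 900.0},
    ("2025-11-01", "Paid Social"):  {"engaged": 120, "revenue": 400.0},
}
COGS_RATE = 0.40  # assumed cost of goods sold as a share of revenue

for key in sorted(ads.keys() & ga4.keys()):   # inner join on the shared key
    a, g = ads[key], ga4[key]
    engaged_per_click = g["engaged"] / a["clicks"]
    profit = g["revenue"] * (1 - COGS_RATE) - a["cost"]
    print(key, f"engaged/click={engaged_per_click:.2f}", f"profit={profit:.2f}")
```

Side by side, the cheap social clicks look far less attractive once engagement and profit sit in the same table, which is exactly the kind of insight a blended dashboard surfaces.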
