

Google brings vehicle feeds to Search campaigns

The Evolution of Automotive Search Advertising

The automotive industry has always been a unique challenge for digital marketers. Buying a vehicle is one of the most significant financial decisions a consumer makes, involving a complex journey that spans weeks or even months of research. Historically, Google Search ads for car dealerships were limited to text-based formats. While effective for brand awareness and broad queries, these text ads often lacked the visual impact and specific data needed to convert a high-intent shopper who wants to see exactly what is on the lot.

Google has officially addressed this gap by bringing vehicle feeds into Search campaigns. This update represents a major shift in how automotive inventory is displayed across the Google ecosystem. By integrating product-rich data directly into standard text ads, Google is blurring the lines between traditional search marketing and the visual, data-driven world of Google Shopping. This change allows advertisers to showcase real-time inventory—including images, pricing, and specific model details—directly within the search engine results page (SERP).

Understanding Vehicle Feeds in Search Campaigns

At its core, this update allows automotive advertisers to link their Google Merchant Center vehicle feeds directly to their Search campaigns. Previously, vehicle inventory ads (VIAs) were often treated as a distinct category or required specific campaign types to function effectively. Now, these visual assets can be integrated as extensions of existing Search ads.

When a user searches for a specific vehicle or a broader category—such as “SUVs for sale near me” or “2024 Toyota RAV4 price”—Google can now pull specific listings from the advertiser’s feed. These listings appear as clickable assets alongside or below the main text ad. This creates a hybrid ad format that combines the persuasive power of a well-written headline with the immediate utility of a product listing.

For the consumer, this means a more informative experience. Instead of clicking a generic link and having to navigate a dealership’s website to find a specific car, they can see the car, the price, and the location before they even leave the Google search results page. For the advertiser, it means a higher likelihood that the traffic hitting their site is deeply interested in a specific piece of inventory.

How the Integration Works: From Merchant Center to the SERP

The technical backbone of this feature is Google Merchant Center (GMC). For years, GMC has been the primary tool for e-commerce retailers to manage product data for Shopping ads. Its expansion into the automotive sector has been gradual, but the ability to funnel this data into Search campaigns is a significant milestone.

To use this feature, advertisers must have a verified vehicle feed in Merchant Center. This feed contains crucial data points for every vehicle in stock, including the Vehicle Identification Number (VIN), make, model, year, mileage, price, and high-quality image URLs. Once the feed is active and approved, it can be linked to Google Ads.

In the Google Ads interface, these vehicle listings function similarly to other ad assets (formerly known as extensions). Google’s machine learning algorithms take over from there. The system analyzes the user’s query, their location, and their search history to determine which vehicles from the feed are the most relevant.
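To make the data requirements concrete, here is a minimal sketch of what a single feed record might look like before upload. The field names are illustrative assumptions, not Google's exact vehicle feed specification; check the Merchant Center documentation for the attribute names your account requires.

```python
# Illustrative sketch of one vehicle feed record. Field names are examples
# only -- consult the Merchant Center vehicle feed spec for exact attributes.
vehicle = {
    "vin": "1HGCM82633A004352",   # Vehicle Identification Number
    "make": "Toyota",
    "model": "RAV4",
    "year": 2024,
    "mileage_km": 12500,
    "price": "32999.00 USD",
    "image_link": "https://example-dealer.com/images/rav4-004352.jpg",
    "condition": "used",
}

# A feed is typically exported as a delimited file, one row per vehicle in stock.
import csv
import sys

writer = csv.DictWriter(sys.stdout, fieldnames=list(vehicle.keys()), delimiter="\t")
writer.writeheader()
writer.writerow(vehicle)
```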
If a user is looking for “pre-owned trucks,” the ad will dynamically populate with the most relevant trucks currently available in the advertiser’s inventory.

The Shift Toward Visual-First Search

One of the primary reasons this update is notable is the continued move toward a visual-first search experience. In the past, Google Search was a sea of blue links. Over time, we have seen the introduction of image extensions, video assets, and now, full product carousels within text ads.

For the automotive sector, visuals are non-negotiable. A buyer needs to see the condition, color, and trim of a vehicle before they take the next step toward a test drive. By bringing these visual elements into the Search campaign, Google is providing “Shopping-style” elements without requiring the advertiser to manage a separate Shopping campaign structure. This simplifies the workflow for digital marketing managers while simultaneously increasing the real estate their ads occupy on the screen. A search ad with a vehicle feed asset is physically larger and more eye-catching than a standard three-line text ad, which can lead to higher click-through rates (CTR) and better brand recall.

Key Benefits for Automotive Advertisers

The transition to vehicle feeds in Search campaigns offers several strategic advantages for dealerships and automotive groups. Understanding these benefits is essential for maximizing the return on ad spend (ROAS) in an increasingly competitive market.

1. Direct Inventory Showcasing

The most immediate benefit is the ability to show real inventory. In the past, if a dealership had a specific sale on a few units, they would have to manually update their ad copy to reflect that. With vehicle feeds, the inventory is updated automatically. If a car is sold and removed from the feed, it stops appearing in the ads. This ensures that advertising dollars are never spent promoting a vehicle that is no longer available.

2. Improved Lead Quality

Transparency leads to better leads. When a user clicks on a vehicle feed asset, they already know the price, the look of the car, and the basic specs. This means the users who eventually fill out a contact form or call the dealership are further down the sales funnel than those who click on a generic “View Inventory” button. They aren’t just looking for “a car”; they are interested in “that car.”

3. Seamless User Experience

The “Click type” functionality allows for deep-linking. Depending on how the ad is configured, a click on a specific vehicle image can take the user directly to the Vehicle Detail Page (VDP) on the dealership’s website. By reducing the number of clicks between the search query and the product details, advertisers can significantly reduce bounce rates and improve the conversion rate on their landing pages.

4. Automation and Efficiency

Managing individual keywords for every single car on a lot is an impossible task. Vehicle feeds remove that burden by matching live inventory to relevant queries automatically, as the sketch below illustrates.
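As a rough illustration of that dynamic matching, the toy sketch below filters an in-memory inventory the way the ad system conceptually does. Google's actual relevance ranking is proprietary and far more sophisticated; this only demonstrates why a sold unit, once removed from the feed, simply stops serving.

```python
# Toy illustration of feed-driven ad population: select in-stock vehicles
# matching a query's inferred attributes. Not Google's algorithm -- just a
# demonstration of why ads always reflect the current feed.
inventory = [
    {"vin": "A1", "body": "truck", "condition": "used", "price": 28500},
    {"vin": "B2", "body": "suv",   "condition": "new",  "price": 41200},
    {"vin": "C3", "body": "truck", "condition": "used", "price": 31900},
]

def match(query_body: str, query_condition: str, feed: list[dict]) -> list[dict]:
    """Return feed rows matching the query attributes, cheapest first."""
    hits = [v for v in feed
            if v["body"] == query_body and v["condition"] == query_condition]
    return sorted(hits, key=lambda v: v["price"])

# "pre-owned trucks" -> body=truck, condition=used
for vehicle in match("truck", "used", inventory):
    print(vehicle["vin"], vehicle["price"])
```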


Where to focus technical SEO when you can’t do it all

The Growing Challenge of Enterprise Technical SEO

Technical SEO is often the silent engine behind a successful organic search strategy. When it functions correctly, search engines discover, crawl, and index your content effortlessly, allowing your high-quality pages to rank for competitive queries. However, when technical issues accumulate, the entire SEO program can stall. Even the best content cannot overcome a website that is fundamentally difficult for search engines to process.

In the current search landscape, technical SEO remains a top priority for both Google and industry leaders. According to Backlinko’s 2026 Google ranking factors report, technical health is more closely correlated with high rankings than ever before. Yet, despite its importance, in-house SEO teams face a persistent hurdle: a lack of development resources. In many organizations, the technical roadmap is crowded with product features, security patches, and UX overhauls, leaving SEO tasks at the bottom of the pile.

The data highlights this friction clearly. Aira’s State of Technical SEO Report indicates that up to 67% of SEO professionals cite non-SEO development tasks as the primary reason technical changes fail to reach production. This isn’t just a workflow issue; it’s a massive financial drain. Estimates from seoClarity suggest that these technical bottlenecks cost businesses an average of $35.9 million in potential revenue each year. When you cannot do everything, you must do what matters most.

Prioritization: How to Identify High-Impact Wins

The sheer scale of enterprise websites—often spanning millions of URLs—makes a “fix everything” approach impossible. Prioritization becomes the most critical skill for a technical SEO. You need a framework that separates high-impact actions from busy work. Aira’s research suggests a hierarchy of prioritization that many top-tier SEOs follow:

- Quick Wins: Tasks that require minimal development effort but yield significant visibility or ranking gains.
- KPI Impact: Changes directly tied to revenue-generating pages or core business objectives.
- User Impact: Technical fixes that improve the actual experience for the human visitor (e.g., page speed or mobile usability).
- Google Guidelines: Aligning the site with the latest foundational best practices from Google Search Central.
- Industry & Algorithm Changes: Adapting to new search technologies, such as AI Overviews or shifts in Core Web Vitals.

To narrow your focus, consider applying the Eisenhower Matrix to your technical backlog. This tool categorizes tasks into four quadrants: Urgent/Important, Not Urgent/Important, Urgent/Not Important, and Not Urgent/Not Important. Your focus should almost always remain on the “Important” categories, particularly those that eliminate barriers to ranking. If a page isn’t indexed, it cannot rank. If a page is buried too deep, it cannot accumulate authority. By starting with a technical SEO audit, you can generate a data-driven list of tasks that move the needle.

1. Site Architecture and Strategic Siloing

Site architecture is the blueprint of your digital presence. A well-organized structure does more than just help users navigate; it provides search engines with a clear map of your content’s hierarchy and topical authority. Fundamentally, site architecture—often referred to as “SEO siloing”—organizes a website around the specific ways people search.
The Logic of Topical Siloing

The goal of siloing is to group related content together, creating a thematic depth that signals to Google that you are an authority on a specific subject. For an ecommerce site selling power tools, for example, a siloed structure might look like this: Home > Power Tools > Drills > Cordless Drills. Each level of the hierarchy reinforces the parent category, funneling “link equity” and topical relevance throughout the section.

In the age of AI-powered search, organization is even more critical. Large Language Models (LLMs) and search algorithms rely on clear signals to understand the relationship between different pages. A site with a strong internal linking structure and a logical hierarchy sends much stronger relevance signals than a site with a flat or disorganized structure.

Common Architecture Red Flags

When resources are limited, look for these specific architecture failures that actively harm your performance:

- Deeply Buried Pages: If a high-value page is more than four clicks away from the homepage, it is likely receiving very little “link juice” and may be crawled infrequently.
- Orphaned Pages: These are pages with no internal links pointing to them. To a search engine, an orphaned page appears unimportant or irrelevant.
- Topic Cannibalization: Having multiple pages competing for the same core query confuses search engines and dilutes your ranking power.
- Fragmented Supporting Content: Blog posts or guides that are not linked to their corresponding product or service pages are a missed opportunity to build topical authority.

Action Items for Low-Resource Environments

If a full structural overhaul is off the table, focus on these three high-impact maneuvers:

- Internal Link Reinforcement: You don’t need to change URL structures to improve internal linking. Add contextual links from high-authority blog posts to your primary revenue-driving landing pages.
- Content Consolidation: Instead of managing ten thin pages on a single topic, merge them into one “Power Page” and use 301 redirects to consolidate the authority of the old URLs.
- Elevation: Ensure your top 20% of revenue-generating pages are within two to three clicks of the homepage. Sometimes a simple change to the global footer or a “featured products” section on the homepage is all it takes.

2. Mastering Crawling and Indexing

For enterprise-level websites, crawling and indexing are not guaranteed. Google has a limited “crawl budget” for every site, meaning its bots will only spend a certain amount of time and energy fetching your pages. If your site is bloated with low-value URLs, Googlebot might spend its energy on the wrong things, leaving your most important content unindexed.

Prioritizing Indexing Issues

The first step is always to fix what is broken. Use the Google Search Console (GSC) Page Indexing report to see which URLs are being excluded. A useful shortcut is to filter this report by your XML sitemap. If a URL is in your sitemap, it means you have told Google it is important. If Google is still refusing to index it, you have a high-priority problem. Triage these issues by checking for accidental noindex tags; a quick scripted pass over your sitemap URLs, as sketched below, can surface these in minutes.
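Below is a minimal sketch of that scripted check, assuming Python with the requests library and a hand-maintained list of sitemap URLs; the URLs shown are placeholders.

```python
# Minimal triage sketch: flag URLs that ship a noindex signal, either in an
# X-Robots-Tag response header or a robots meta tag in the HTML.
# Assumes `pip install requests`; URLs below are placeholders.
import re
import requests

URLS = [
    "https://example.com/category/widgets/",
    "https://example.com/blog/widget-buying-guide/",
]

META_NOINDEX = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex', re.I
)

for url in URLS:
    resp = requests.get(url, timeout=10)
    header_hit = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
    meta_hit = bool(META_NOINDEX.search(resp.text))
    if header_hit or meta_hit or resp.status_code >= 400:
        print(f"CHECK  {url}  status={resp.status_code} "
              f"header_noindex={header_hit} meta_noindex={meta_hit}")
```

The regex is deliberately loose (it assumes the name attribute precedes content); for production auditing, an HTML parser or a dedicated crawler is more robust.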


Local content playbook: From service pages to jobs-to-be-done pages

The Hidden Visibility Problem in Local SEO

Local SEO has historically focused on a very specific moment in the buyer’s journey: the moment of intent. Most digital marketing strategies for local businesses are built to capture users searching for “near me” terms or specific service-plus-city keywords. While these are high-converting queries, focusing solely on them creates a massive visibility problem that most businesses don’t even realize they have.

The gap exists in the moments leading up to that final search. Before a homeowner searches for “emergency plumber in Brookline,” they are standing in their kitchen looking at a sink that won’t drain. Before a business owner searches for “commercial HVAC repair,” they are dealing with a thermostat that is throwing a cryptic error code. By the time a user searches for a service provider by name, they have already diagnosed their problem. If your website only caters to the diagnosis, you are missing the entire research and discovery phase where trust is actually built. This is where the local content playbook must evolve from simple service pages to “Jobs-to-be-Done” (JTBD) pages.

Service-First Site Structures Miss Real Search Behavior

Traditionally, local service websites follow a predictable hierarchy. There is a homepage, followed by a handful of service pages (e.g., Drain Cleaning, Leak Detection, Pipe Repair), and then a series of location-specific landing pages. From a technical SEO perspective, this is clean and logical. It aligns with how Google’s local algorithm has historically functioned, rewarding businesses that clearly state what they do and where they do it.

However, this structure mirrors how the *business* thinks, not how the *customer* thinks. A business thinks in terms of its service menu. A customer thinks in terms of their immediate, often stressful, reality. Consider the search journey of someone with a failing furnace. They don’t always start by searching for “furnace replacement.” Their journey often begins with queries like:

- “Why is my furnace making a whistling noise?”
- “Furnace blowing cold air but heat is on.”
- “How to reset a furnace pressure switch.”

If your website only has a page titled “Furnace Repair Services,” you are likely invisible for these “symptom-first” searches. Even if you do rank, a generic service page often fails to answer the user’s immediate question, leading them to bounce back to the search results to find a more helpful resource. This mismatch is the primary reason many local sites underperform despite having strong backlink profiles or high-quality technical setups.

What is a Jobs-to-be-Done Page?

The “Jobs-to-be-Done” framework is a concept borrowed from product development and innovation theory. It suggests that customers don’t “buy” products or services; they “hire” them to do a specific job. In the context of local SEO, a JTBD page is a hybrid of an informational guide and a high-converting landing page. It is a “help + hire” asset. Its primary goal is to help the reader understand their current situation, evaluate their options, and decide on a smart next step—which, ideally, involves contacting your business.

While a JTBD page might look like a blog post at first glance because it is text-heavy and educational, its intent is fundamentally different. A blog post is often designed for broad awareness or top-of-funnel traffic. A JTBD page is designed for decision support.
It targets users who have a specific problem and are looking for a solution, even if they aren’t yet sure what that solution is.

Why Service Pages Still Matter (But Aren’t Enough)

To be clear, service pages are not obsolete. They remain the heavy hitters for “bottom-of-funnel” searches. When someone knows they need a specific service, they look for:

- “Best [service] near me”
- “[Service] prices in [City]”
- “[Service] reviews [City]”

Service pages are built to convert these “hire-ready” users by highlighting experience, showcasing reviews, and offering clear calls to action. The problem is that the market for these “ready to hire” users is incredibly competitive and expensive to capture via PPC or organic search.

JTBD pages allow you to enter the conversation earlier. When you help a user understand why their kitchen sink is gurgling, you establish authority and empathy before they even look for a plumber. By the time they realize the job requires a professional, you are the obvious choice because you’ve already provided value.

The JTBD Structure That Consistently Converts

A successful JTBD page follows the psychological sequence of a person in a mini-crisis. It doesn’t start with a sales pitch; it starts with a mirror of the user’s experience. To build a page that ranks and converts, follow this five-step structure.

1. Start with Symptoms, Not Marketing

The opening of your page should confirm that the reader is in the right place. Instead of starting with “We have 20 years of experience in plumbing,” start with the physical signs the user is seeing. If the page is about a slow-draining sink, list the symptoms: the water pooling at the bottom, the strange gurgling sounds, or the unpleasant smell coming from the drain. This creates an immediate “That’s exactly what’s happening to me” moment.

Directly after this section, offer a “triage” call to action. For example: “If your sink is currently overflowing and causing water damage, call our emergency line now. If it’s just draining slowly and you want to know why, keep reading.” This captures the urgent leads immediately while retaining the researchers.

2. Explain Likely Causes Without Remote Diagnosis

Readers want to know *why* something is happening, but they also know you can’t see through their pipes from a website. Use conditional reasoning to explain potential causes. For example: “If only your kitchen sink is slow, the clog is likely in the P-trap or the local branch line. If every drain in your house is slow, you likely have a more serious issue in your main sewer line.”

This type of “if/then” logic is incredibly helpful for the user. It signals that you are an expert who understands the nuances of the problem, and it helps them narrow down the severity of their situation.


30-day vs. 7-day attribution in Google Ads: What the shorter window revealed

In the world of digital advertising, data is often treated as an absolute truth. However, the lens through which we view that data—our attribution model—can fundamentally change the story the numbers tell. For years, the 30-day click-through attribution window has been the “set it and forget it” default for Google Ads campaigns. Most advertisers accept this setting without a second thought, assuming that a wider window provides more data and, therefore, better optimization.

But what happens when the default settings contradict the actual behavior of your customers? For many businesses, especially in the fast-moving Direct-to-Consumer (DTC) space, the journey from first click to final purchase doesn’t take a month; it takes a matter of hours or days. When your attribution window is misaligned with your sales cycle, you aren’t just looking at “extra” data—you are potentially feeding your bidding algorithms noisy, low-quality signals that lead to inefficient spending.

This article explores a deep-dive case study of a DTC retailer that challenged the status quo. By shifting from a 30-day to a 7-day attribution window, they uncovered significant discrepancies in how Google and Meta were claiming credit for sales, eventually leading to a more profitable and transparent marketing mix.

The Problem with the 30-Day Default

The 30-day attribution window is designed to capture the “long tail” of the customer journey. It assumes that a user might click an ad today, browse for three weeks, and then finally return to buy. While this makes sense for high-ticket items like enterprise software or luxury automobiles, it is often overkill for impulse-driven retail or competitive DTC products.

In this specific case, the client was a DTC retailer operating in an intensely competitive landscape. Despite the aggressive market, an analysis of their internal data revealed a startling fact: their average conversion lag was just 2.2 days. Most of their customers were making a decision almost immediately.

By keeping a 30-day window active, Google Ads was allowed to claim credit for a sale even if the last interaction happened nearly a month prior. This created a “credit-hogging” environment where Google Ads and Meta Ads (where the client spent the majority of their budget) were both claiming the same conversions. When both platforms claim 100% credit for a single sale, the reported Return on Ad Spend (ROAS) becomes an illusion, making it impossible for the business to see the true incremental impact of their spend.

The Overlap Conflict: Google vs. Meta

Most modern brands utilize a multi-channel approach. In this scenario, Meta Ads was the primary driver of top-of-funnel awareness and initial discovery. However, because Google Ads was set to a 30-day window, any user who had clicked a Google Search or Shopping ad at any point in the previous month was being “claimed” by Google when they eventually converted—even if a Meta ad was the actual catalyst for that day’s purchase.

This lack of clarity meant the client was flying blind. They couldn’t tell which platform was actually driving new growth and which was simply retargeting users who were already going to buy. To fix this, a radical change in the attribution window was necessary to “tighten” the feedback loop.

The 7-Day Attribution Test Strategy

Moving from a 30-day to a 7-day window is not a change to be made lightly.
Because Google Ads’ Smart Bidding relies on historical conversion data to set bids, a sudden change in how conversions are reported can send the algorithm into a tailspin. If the “primary” conversion action changes, the learning phase resets, and performance can fluctuate wildly. To mitigate this risk, the team implemented a phased transition strategy throughout January. The goal was to ensure the bidding algorithm had enough data to recalibrate without losing momentum.

Step 1: The Secondary Conversion Action

The first step was to create a duplicate of the primary purchase conversion. This new action was identical in every way, except the click-through window was set to 7 days instead of 30. Crucially, this was set as a “secondary” conversion action. This meant it would show up in the “All Conversions” column for reporting purposes but would not be used by the Smart Bidding algorithm to optimize bids. This allowed for a side-by-side comparison of the 30-day vs. 7-day data without affecting campaign performance.

Step 2: Observation and Monitoring

For two weeks, the team monitored the delta between the two conversion actions. Because the average conversion lag was 2.2 days, the hypothesis was that the 7-day window would capture the vast majority of “real” intent. If the numbers stayed relatively close, it would confirm that the 30-day window was mostly capturing “noise” or “accidental” credit from long-past clicks.

Step 3: The Primary Switch

On January 12, 2026, the team officially made the 7-day conversion action the “primary” action for optimization. At this point, the 30-day version was moved to “secondary.” This forced Google’s Smart Bidding (Target ROAS and Maximize Conversion Value) to optimize specifically for users who convert within a one-week timeframe.

Analyzing the Results: In-Platform Metrics

After the switch, the team compared the 30-day period following the change to the previous period. It is important to note that the previous period included the peak holiday shopping season, which usually sets a very high bar for performance. Despite following a seasonal peak, the in-platform metrics improved significantly:

- Cost: Decreased by 6.3%. The campaigns became more efficient, spending less to achieve better results.
- Conversions: Increased by 42.9%. By narrowing the window, the algorithm focused on higher-intent users.
- Conversion Value: Increased by 52.1%. Not only were there more sales, but they were of higher value.
- ROAS: Increased by 62.3%. The efficiency of every dollar spent saw a massive jump.

While these in-platform numbers were impressive, platform-reported data can sometimes be misleading. To find the “truth,” the team looked toward Shopify sales data and Marketing Mix Modeling (MMM).

Beyond the Dashboard: Real Business Impact

The ultimate goal of any attribution change is to drive actual business growth, not just prettier charts in Google Ads. By checking the Shopify sales data against the platform’s reporting, the team could verify whether the in-platform gains reflected real incremental revenue rather than reshuffled credit.
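For readers who want to replicate the Step 2 analysis on their own accounts, here is a minimal sketch of the conversion-lag math, assuming you can export a last-click timestamp and a purchase timestamp for each order. The timestamps below are placeholder data, not the client's.

```python
# Sketch of the conversion-lag analysis behind the window decision.
# Assumes an export with one (last_click_time, purchase_time) pair per order;
# the sample data here is purely illustrative.
from datetime import datetime, timedelta

orders = [
    (datetime(2026, 1, 2, 10), datetime(2026, 1, 3, 9)),    # ~1 day lag
    (datetime(2026, 1, 5, 14), datetime(2026, 1, 5, 18)),   # same day
    (datetime(2026, 1, 8, 9),  datetime(2026, 1, 17, 20)),  # ~9.5 day lag
]

lags = [purchase - click for click, purchase in orders]
avg_lag = sum(lags, timedelta()) / len(lags)
within_7d = sum(lag <= timedelta(days=7) for lag in lags) / len(lags)

print(f"average lag: {avg_lag.total_seconds() / 86400:.1f} days")
print(f"share of conversions a 7-day window would capture: {within_7d:.0%}")
```

If the 7-day capture rate comes back near 100%, as it would with a 2.2-day average lag, the longer window is mostly recording stale credit.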


Why customer personas help you win earlier in AI search

The Evolution of Search: From Keywords to Conversations

For decades, search engine optimization was a game of matching keywords. If a user typed “CRM software” into Google, the goal was to have a page optimized for that specific phrase. However, the rise of artificial intelligence and Large Language Models (LLMs) has fundamentally altered the search landscape. We are moving away from a world of disjointed keyword queries and into an era of sophisticated, AI-driven discovery.

In this new environment, buyers don’t just search; they consult. They engage in multi-turn dialogues with AI assistants like ChatGPT, Claude, and Google’s Gemini. They provide context, describe their specific frustrations, and expect the AI to act as an expert advisor. This shift creates a massive challenge for traditional content marketing. If your content is generic, it becomes invisible to the AI models that now gatekeep information.

To win in AI search, brands must embrace a more precise approach to content creation. This is where the intersection of the “They Ask, You Answer” (TAYA) framework and detailed buyer personas becomes a competitive necessity. By understanding exactly who is asking the question and what specific problem they are trying to solve, you can create content that AI models find authoritative, relevant, and worthy of recommendation.

The Problem with the Generic Content Trap

The “They Ask, You Answer” framework, pioneered by Marcus Sheridan, is built on a simple premise: your customers have questions, and your job is to answer them honestly and thoroughly. It is one of the most effective content strategies ever devised. However, many marketing teams fail in the execution because they fall into the “generic question trap.”

When teams brainstorm content ideas, they often start with broad, high-volume educational topics. They ask themselves, “What is our product?” and “What category do we live in?” This leads to articles like “What is CRM software?” or “The Benefits of Marketing Automation.” While these topics are factually relevant, they are also incredibly crowded and, more importantly, disconnected from the reality of a modern buyer’s journey.

Generic questions produce generic content. If you write a 1,000-word article on “What is a Warehouse Management System,” you are competing with every other company in that space, as well as Wikipedia and dictionary sites. More importantly, real buyers rarely ask these academic questions when they are actually ready to solve a problem. A buyer isn’t looking for a definition; they are looking for a way out of a specific mess.

Real buyers ask questions that are anchored in their unique situation. They include variables like company size, industry-specific hurdles, and internal team dynamics. When your content ignores these nuances, it fails to provide the “signal” that AI search engines need to match your solution to a specific user’s query. To win, you must stop writing for the masses and start writing for the person.

Why Context is the New Currency in AI-Driven Discovery

AI search behavior is significantly different from traditional search behavior because it allows for—and encourages—extreme detail. In the past, a user might have been afraid to type a 50-word sentence into Google for fear of confusing the algorithm. Today, users regularly feed AI assistants paragraphs of context to get a better answer.
Consider the difference between these two queries:

- Traditional query: “Best sales CRM 2025”
- AI-driven query: “I run a 15-person marketing team in a B2B SaaS company, and we’re struggling to track leads properly because our current system is too manual. We need something that integrates with Slack and doesn’t require a full-time admin. What should we do?”

The second query is a consultation. The AI doesn’t just look for the keyword “CRM.” It parses the entire scenario. It looks for solutions that fit a “15-person team,” “B2B SaaS,” “Slack integration,” and “low administrative overhead.” If your content explains exactly why a specific persona (like a Marketing Director at a mid-sized SaaS firm) experiences lead-tracking failures and how to fix them, you have a much higher chance of being the primary source the AI uses to formulate its response. This puts your brand into the consideration set much earlier in the buyer’s journey—often before they have even looked at a specific product list.

Case Study: The Conversational Journey of Marcus in Birmingham

To understand how this works in a real-world scenario, let’s look at a lifestyle example that mirrors the B2B buying process. Imagine a man named Marcus. He is 50 years old and is planning a reunion with old friends in Birmingham, UK. Instead of searching for “bars in Birmingham,” Marcus starts with a broad, contextual prompt to an AI assistant: “I’m looking for some ideas of things to do with friends in Birmingham on the weekend. I’m 50, and I have several male friends coming down to get together for a day. There will be some beers, no doubt, but we need some activities as well.”

The AI processes several “persona” data points: age (50), gender (male group), location (Birmingham), and preference (beers + activities). It suggests several options: high-end gastropubs, an F1 gaming arcade, and a canal tour.

Marcus responds to the gaming idea: “Ah, we all like games. What about gaming arcades? What gaming arcades could you recommend?” The AI then narrows the field. It suggests a pinball arcade in the Digbeth area. Marcus follows up again: “Pinball Factory in Digbeth sounds fun. What else is there to do around there, food and drinks-wise?”

In this conversation, the “Pinball Factory” won because the AI knew it was a fit for a specific persona looking for a specific type of fun. The venue didn’t just show up because it was a “business in Birmingham”; it showed up because the context of the user’s life matched the context of the venue’s offering. If that venue had content on its site specifically about “Why Digbeth is the perfect reunion spot for older gamers,” it would have solidified that recommendation even further. Being part of the early conversation allows you to shape the user’s criteria.


Starting Or Steering The Wave

The Evolution of Search: Moving Beyond Utility

For over a decade, the playbook for search engine optimization was relatively straightforward. Marketers identified keywords with high search volume, analyzed the competition, and created “utility content” designed to answer specific questions or fulfill immediate needs. This approach, often referred to as utility SEO, focused on being the most helpful resource for people already looking for a solution. It was a reactive strategy—chasing the demand that already existed in the market.

However, the landscape of digital marketing and search behavior is undergoing a seismic shift. The rise of generative AI, the integration of AI Overviews in search results, and the increasing sophistication of user intent mean that simply answering a “what is” or “how to” question is no longer enough to maintain a competitive edge. The value of traditional utility content is depreciating. To survive and thrive in the modern era, marketers must decide whether they are content to simply steer the existing wave of search traffic or whether they have the courage to start the wave themselves.

The Decline of Utility SEO

Utility SEO is built on the premise of providing factual, straightforward information. Think of articles titled “What is a CRM?” or “How to bake a sourdough starter.” While this content once drove massive amounts of traffic, its effectiveness is being eroded by two primary forces: AI-driven answers and content saturation.

With the advent of Large Language Models (LLMs) and tools like Perplexity, ChatGPT, and Google’s own AI Overviews, the need for a user to click through to a website to get a basic definition is disappearing. If a search engine can provide a concise, accurate answer directly on the results page, the “utility” of a third-party blog post vanishes. This leads to the “zero-click” phenomenon, where search volume might remain high, but actual organic click-through rates (CTR) plummet.

Furthermore, the barrier to entry for creating utility content has dropped to near zero. Anyone with an AI prompt can generate a 1,000-word guide on a common topic. This has led to a glut of “average” content that offers no unique perspective, making it increasingly difficult for brands to stand out or build meaningful authority through traditional keyword targeting alone.

Steering the Wave: Optimizing for Existing Demand

Steering the wave represents the traditional, yet evolved, SEO approach. It involves identifying established trends, high-volume keywords, and existing consumer needs, and then positioning your brand to capture that traffic. While more difficult than it used to be, steering the wave is still a vital part of a balanced digital strategy.

The key to successfully steering the wave today is not just about matching keywords, but about providing superior depth and user experience. When you steer a wave, you are competing in a crowded space. To win, your content must be better than the AI summary. It needs to include original research, expert quotes, interactive elements, or proprietary data that an LLM cannot easily scrape and synthesize into a single paragraph.

Steering the wave requires a high degree of technical SEO precision. You must ensure your site architecture is flawless, your Core Web Vitals are optimized, and your E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) signals are undeniable. In a world where search engines are pickier than ever, being a “fast follower” on a trend requires excellence in execution.
The Risks of Only Steering

The primary risk of a strategy focused solely on steering existing waves is commoditization. When you only chase existing search volume, you are essentially a price-taker in the marketplace of ideas. You are waiting for others to define the conversation and then trying to insert yourself into it. This often leads to a “race to the bottom” where brands compete on incremental improvements rather than transformative value.

Starting the Wave: Creating Demand Through Innovation

Starting the wave is a more ambitious and ultimately more rewarding strategy. Instead of looking at keyword tools to see what people are already searching for, marketers who start waves look at the market to see what people *should* be thinking about. This is the essence of demand generation versus demand capture.

When you start a wave, you are introducing new concepts, coining new terminology, and identifying problems that consumers didn’t even know they had. You aren’t just optimizing for a keyword; you are creating the keyword. If successful, you become the definitive source for that topic, and every other competitor who enters the space later is merely steering the wave you created.

The Power of Brand Authority

Starting a wave is deeply tied to brand building. When a company like HubSpot pioneered the term “Inbound Marketing,” there was no search volume for that phrase. By creating the category, they ensured that for years, they were the undisputed leaders of the conversation. They didn’t wait for the wave; they built the ocean.

In the age of AI, starting the wave is a defensive moat. AI models are trained on existing data. They are excellent at summarizing what has already been said, but they are poor at inventing new frameworks or predicting the next major shift in industry philosophy. By producing truly original thought leadership, you provide the “training data” for the future, ensuring your brand remains relevant as the primary source of truth.

The Synergy Between Starting and Steering

The most successful modern marketing departments do not choose one over the other; they balance both. They use “starting the wave” strategies to build long-term brand equity and “steering the wave” strategies to capture immediate conversions and maintain a baseline of traffic.

For example, a tech company might start a wave by publishing a white paper on a new, revolutionary way to manage remote teams (demand generation). Once that concept gains traction and people start searching for terms related to that new methodology, the company uses SEO best practices to steer that new traffic back to their product pages (demand capture).

The Content Lifecycle

1. **Creation (Starting):** You publish an opinionated, data-backed piece that challenges the status quo.


Google Says Hundreds of Its Crawlers Are Not Documented

The Hidden Architecture of the Web: Unveiling Google’s Secret Crawlers

For years, the Search Engine Optimization (SEO) community has operated under a set of established rules regarding how Google interacts with websites. We optimize for Googlebot, we monitor our server logs for known user-agent strings, and we carefully craft our robots.txt files to guide the path of the world’s most famous web crawler. However, a recent revelation from Google’s Gary Illyes has sent ripples through the technical SEO world. It turns out that the Googlebot we know is just the tip of a very large, mostly hidden iceberg.

According to Illyes, Google employs hundreds of different crawlers that are not publicly documented. While the SEO industry is familiar with the primary agents used for indexing and ads, this vast fleet of undocumented crawlers operates behind the scenes, performing tasks that remain largely mysterious to the public. This admission raises significant questions about how we manage server resources, how we identify “good” versus “bad” bots, and how Google’s internal infrastructure has evolved to meet the demands of a modern, AI-driven internet.

Beyond Googlebot: A Diverse Ecosystem of Automated Agents

To understand the significance of Illyes’ statement, we must first look at what we already know. Google provides a public list of “common” crawlers. These include the primary Googlebot (which comes in Desktop and Mobile versions), Googlebot-Image, Googlebot-Video, and Googlebot-News. Beyond search, there are utility crawlers like AdsBot-Google, which checks landing page quality for advertisers, and Feedfetcher, which retrieves RSS feeds for Google News and PubSubHubbub.

These documented crawlers are well-behaved. They respect robots.txt directives, follow a predictable pattern of behavior, and identify themselves clearly via their User-Agent strings. SEOs rely on this documentation to troubleshoot indexing issues and ensure that their sites are being crawled efficiently. But as it turns out, these documented agents represent only a fraction of the total automated traffic Google sends to the web.

The existence of “hundreds” of undocumented crawlers suggests a level of complexity in Google’s operations that far exceeds the standard indexing and ranking cycle. These crawlers likely serve highly specialized roles—from internal data validation and experimental testing to the massive data-gathering efforts required to train Large Language Models (LLMs) like Gemini.

Why Does Google Use Undocumented Crawlers?

The immediate question most webmasters ask is: why keep these crawlers secret? The answer likely lies in the balance between transparency and operational agility. Google is a massive organization with thousands of engineers working on disparate projects. Not every project requires a permanent, documented crawler that will be active for years to come.

Many of these undocumented agents are likely “transient” crawlers. They might be deployed for a specific research project, a temporary data collection effort for a new feature, or to stress-test how a certain type of web architecture handles requests. By not documenting every single one of these, Google avoids cluttering its official documentation with agents that might only exist for a few weeks or months. It also prevents webmasters from creating overly specific robots.txt rules that might break Google’s internal experimental tools.

Furthermore, documentation creates a maintenance burden.
Every time a crawler’s behavior or name changes, Google would need to update public-facing guides in dozens of languages. In a fast-moving tech environment, the friction of maintaining an exhaustive list of hundreds of niche bots likely outweighs the benefit of total transparency.

The Impact on Server Logs and Technical SEO

From a technical SEO perspective, the presence of hundreds of undocumented bots creates a challenge for server log analysis. Log analysis is the practice of examining the records of every request made to a web server to understand how search engines are interacting with a site. When an SEO sees an unknown bot making hundreds of requests, the natural reaction is often to block it to save server resources or prevent potential scraping.

If these “unknown” bots are actually Google-owned agents, blocking them could have unintended consequences. While Illyes noted that these crawlers often do not impact search indexing directly, they might be involved in other Google services that a business relies on. For instance, a bot might be verifying structured data for a specific rich result or checking for security vulnerabilities that could land a site on a “Safe Browsing” blacklist.

The lack of documentation makes it difficult for system administrators to distinguish between a legitimate Google service and a malicious bot masquerading as a search engine. This is a practice known as “spoofing,” where bad actors use a Google-like User-Agent string to bypass security filters. Without a definitive list of “good” bots, the job of a security professional becomes significantly harder.

The Identification Dilemma: User-Agents vs. Reverse DNS

If we cannot rely on a public list of documented User-Agents, how are we supposed to identify these mystery crawlers? Google’s standard advice has always been to use reverse DNS lookups. Even if a crawler’s name is unfamiliar, if its IP address resolves to a googlebot.com or google.com domain, it is a legitimate agent from Google.

However, running a reverse DNS lookup for every single request hitting a server is computationally expensive and can slow down server performance. Many modern firewalls and Web Application Firewalls (WAFs) rely on pre-compiled lists of IP ranges or known User-Agents to make split-second decisions on whether to allow or block traffic. When Google deploys hundreds of agents that aren’t on these lists, it increases the risk of “false positives,” where legitimate Google traffic is accidentally throttled or blocked.

Illyes’ comments highlight a shift in how we must view bot management. We can no longer assume that anything not on the “official list” is a threat. Instead, we must look at the source of the traffic and the behavior of the agent. Legitimate Google crawlers, documented or not, typically follow the rules of the internet: they don’t try to brute-force login pages, they don’t ignore “429 Too Many Requests” responses, and they generally identify as coming from Google-owned infrastructure.
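That verification advice translates into a short two-step check: reverse-resolve the requesting IP, confirm the hostname belongs to googlebot.com or google.com, then forward-resolve that hostname and confirm it maps back to the original IP. Below is a minimal sketch using only Python's standard library; the sample IP is an example from a range historically published for Googlebot.

```python
# Two-step verification of a suspected Google crawler IP:
# 1) reverse DNS must land on a Google-owned hostname;
# 2) forward DNS of that hostname must return the original IP.
import socket

GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def is_google_crawler(ip: str) -> bool:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
    except socket.herror:
        return False
    if not hostname.endswith(GOOGLE_SUFFIXES):
        return False
    try:
        _, _, forward_ips = socket.gethostbyname_ex(hostname)  # forward lookup
    except socket.gaierror:
        return False
    return ip in forward_ips

# Example IP from a range historically associated with Googlebot.
print(is_google_crawler("66.249.66.1"))
```

Because DNS lookups are slow, cache the results rather than running this per request, consistent with the performance concern noted above.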


YouTube tests sticky banner after ad skip

The Changing Landscape of YouTube Monetization

For over a decade, the “Skip Ad” button has been the most popular feature on YouTube for millions of viewers worldwide. It represents a moment of relief—a way for users to bypass promotional content and get straight to the entertainment or information they were seeking. For advertisers, however, that button has often represented a “hard stop” to their message, leading to lost impressions and fragmented brand stories.

However, the dynamics of digital video advertising are shifting once again as YouTube begins testing a new “sticky banner” format that persists even after the user hits the skip button. This experimental feature signals a significant pivot in how Google manages the relationship between viewers, creators, and brands. By introducing a branded card that remains visible after the video ad has been dismissed, YouTube is effectively blurring the lines between skippable and non-skippable inventory. This move aims to maximize the value of every ad placement, ensuring that a brand’s presence remains on the screen even when the user has expressed a desire to move on.

What is the YouTube Sticky Banner?

The “sticky banner” is a post-skip overlay designed to maintain brand visibility. Traditionally, when a user clicks “Skip Ad” on a TrueView in-stream ad, the video ad disappears entirely, and the primary content begins playing immediately. The interaction is binary: the ad is either on the screen or it is gone.

Under this new test, first identified by Anthony Higman, Founder and CEO of Adsquire, the behavior of the skip action is being modified. When a viewer clicks the skip button, the video portion of the advertisement stops as expected. However, a persistent, branded banner—essentially a digital “sticky note”—remains anchored to the video player. This banner typically contains the advertiser’s logo, a brief call to action, or a product image. It stays visible over the bottom or side of the main video content until the viewer manually dismisses it by clicking a small “X” or “close” icon.

This persistent element ensures that the advertiser’s message isn’t completely erased from the viewer’s consciousness. Even as the user watches their intended video, the brand remains in their peripheral vision, creating a lingering impression that was previously impossible with standard skippable formats.

The Strategy Behind the Move: Why Now?

YouTube’s decision to test this format doesn’t happen in a vacuum. It is a calculated response to several converging trends in the digital advertising industry. As the platform matures, Google is under constant pressure to increase revenue and provide better results for its advertising partners, who are increasingly demanding more “viewable” time for their investments.

Combating Ad Fatigue and the “Skip Reflex”

Most YouTube users have developed a “skip reflex.” As soon as a countdown timer appears, their cursor or thumb hovers over the area where the skip button will manifest. This behavior means that many 30-second ads are only seen for exactly five seconds. While YouTube does not charge advertisers for “skipped” views under certain bidding models, the missed opportunity for brand building is significant.

The sticky banner attempts to capitalize on the time *after* the skip. By remaining on the screen while the user is actually focused on the content they want to see, the banner gains “active” attention time that the video ad might have lacked.
It’s a way for brands to get a second chance at making an impression without being as intrusive as a non-skippable 30-second ad.

Increasing Brand Recall Without Friction

One of the primary goals of any advertising campaign is brand recall—the ability of a consumer to remember a brand after seeing an ad. Research consistently shows that longer exposure times lead to higher recall rates. By adding a sticky banner, YouTube is essentially extending the duration of the ad’s visual presence. Even if the audio and motion have stopped, the visual anchor remains, reinforcing the brand identity in the viewer’s mind.

The Technical and Visual Implementation

From a user interface (UI) perspective, the sticky banner is designed to be noticeable but not entirely disruptive. It typically occupies a small portion of the player window. On desktop environments, it may appear as a companion banner that has been “pulled” into the player frame. On mobile devices, it often sits at the bottom of the video, near the playback controls.

The banner is tied directly to the original ad’s creative assets. Advertisers don’t necessarily need to create entirely new assets for this; instead, the system can pull existing images and text from the campaign’s “Call to Action” extensions or “Companion Banners.” This makes it an easy feature for Google to scale across millions of existing campaigns if the test proves successful.

The “Manual Dismissal” Requirement

Crucially, the sticky banner does not go away on its own. It requires an active choice from the user to close it. This introduces a new metric for YouTube to track: “active dismissal.” If a user leaves the banner up for several minutes while watching a video, it suggests a high level of passive brand exposure. If the user closes it immediately, it provides data on ad sentiment and intrusiveness.

How This Changes the Game for Advertisers

For digital marketers and brands, the sticky banner represents a fundamental shift in the value of skippable inventory. For years, the “skip” was seen as a failure of the creative to capture the audience’s attention. Now, the skip might simply be the transition to a different phase of the ad experience.

Redefining “Viewability” Metrics

In the world of SEO and digital marketing, viewability is a key performance indicator (KPI). Standard industry definitions usually count an ad as “viewable” if a certain percentage of its pixels are on screen for a specific amount of time. The sticky banner complicates this. If the video is skipped but the banner remains for two minutes, does that count as a “long-duration” impression? Advertisers will likely see new reporting metrics in the Google Ads dashboard that account for this post-skip visibility.


Google adds video visibility to Performance Max reporting

Introduction to Enhanced Transparency in Performance Max

Performance Max (PMax) has been a cornerstone of the Google Ads ecosystem since its full rollout, representing a significant shift toward AI-driven automation. By consolidating Search, Display, YouTube, Discover, Gmail, and Maps into a single campaign type, Google promised advertisers greater reach and better conversions through machine learning. However, this convenience often came at the cost of transparency. For years, digital marketers have referred to PMax as a “black box,” where inputs go in and results come out, but the specific mechanics of which assets drove which results remained obscured.

Google is now taking a meaningful step toward lifting that veil. In a recent update to the platform’s reporting capabilities, Google Ads has introduced a new “Ads using video” segment within Performance Max channel performance reporting. This feature allows advertisers to dissect their data with a specific focus on video assets, providing a clearer picture of how these creative elements influence overall campaign success.

This update is more than just a minor UI tweak; it is a response to the growing demand from performance marketers for more granular control and insight within automated environments. As video content continues to dominate the digital landscape—fueled by the explosive growth of YouTube Shorts and vertical video consumption—understanding the ROI of video production has never been more critical.

Understanding the “Ads Using Video” Segment

The core of this update lies in the ability to segment performance data based on the presence of video assets. When navigating the channel performance reports within a Performance Max campaign, advertisers can now apply a filter or segment specifically for “Ads using video.” This allows for a side-by-side comparison of placements that utilized video assets versus those that relied solely on text and images.

In the past, an advertiser might see that their campaign performed well on YouTube, but they couldn’t easily distinguish if that performance was driven by the high-quality video they uploaded or if the algorithm was simply serving static image-based ads on the YouTube masthead or in-feed placements. With the new segment, the data is broken down to show exactly how much of the traffic, spend, and conversion volume is attributed to video-centric ad units.

This level of detail is essential for verifying the impact of creative investments. Producing high-quality video is often the most expensive and time-consuming part of a creative strategy. Advertisers need to know if that investment is yielding a lower Cost Per Acquisition (CPA) or a higher Return on Ad Spend (ROAS) compared to cheaper, static alternatives.

The Evolution of Performance Max Reporting

To understand why this update is significant, one must look at the history of Performance Max. When it was first introduced, PMax offered very little in the way of reporting. Marketers could see overall campaign performance but had little insight into which “channels” (Search vs. YouTube vs. Display) were doing the heavy lifting. Over time, Google introduced the “Placement Report” and “Search Terms Insights,” but creative reporting remained a significant pain point.

The “Ads using video” segment is part of a broader trend toward incremental improvement in metric visibility. It bridges the gap between the fully automated “trust the algorithm” approach and the data-driven “verify the results” approach preferred by sophisticated media buyers.
By allowing advertisers to see how video assets perform across Google’s automated inventory, the platform is finally providing a way to validate the “Creative Excellence” scores that Google often promotes. This update was first brought to light by Hana Kobzova, the founder of PPC News Feed, who noted that this segmenting capability is appearing in the channel performance sections of accounts. As Google continues to integrate its Gemini AI models into the ad creation process, this reporting will become even more vital for distinguishing between human-made creative and AI-generated assets.

Why Video Visibility Matters for Modern Marketers

Video is no longer an optional component of a digital marketing strategy; it is a primary driver of engagement. However, not all video is created equal. In the context of Performance Max, Google often uses “automatically created videos” if an advertiser fails to provide their own. These auto-generated videos are often simple slideshows of the images and text provided in the asset group, and their performance can vary wildly. With the new reporting visibility, advertisers can now answer several critical questions:

Is Professional Video Outperforming Auto-Generated Content?

Many advertisers worry that Google’s auto-generated videos may actually hurt brand perception or conversion rates. By segmenting results, a brand can compare an asset group that includes professional video against one that relies on Google’s automated tools. If the professional video shows a significantly higher conversion rate or better engagement metrics, it provides a data-backed reason to increase the video production budget.

How Does Video Impact the Customer Journey?

Video often serves a different purpose than Search ads. While Search is high-intent and bottom-of-funnel, video (especially on YouTube and Discover) often serves to build awareness and demand. The “Ads using video” segment helps marketers understand if video assets are driving assisted conversions or if they are effectively closing sales in a way that static Display ads are not.

Optimizing for the Right Placements

Performance Max spreads ads across a massive network. Video assets are primarily served on YouTube and the Google Display Network. By seeing the performance of “Ads using video,” marketers can infer how well their creative is resonating on these specific platforms. If video performance is lagging, it might indicate that the creative is too long, not optimized for vertical viewing (Shorts), or failing to capture attention in the first three seconds.

Integrating Video Reporting into Your Strategy

To make the most of this new visibility, advertisers should revisit their Performance Max structure. Instead of simply looking at the new reporting segment in isolation, it should be used to inform a broader testing framework. Consider running an A/B test by creating two different Asset Groups within the same Performance Max campaign. Asset Group A could contain only high-quality static images and text, while Asset Group B adds professional video assets. Comparing the segmented results for the two groups, as sketched below, shows whether video is actually earning its production budget.
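Since the article describes a UI segment rather than a documented API field, one pragmatic way to run that comparison is to export the channel performance report and aggregate the two slices yourself. A minimal sketch, assuming a CSV export with hypothetical column names (segment, cost, conversions, conversion_value) that you would map from your own export:

```python
# Sketch: compare an exported PMax channel performance report by video segment.
# Column names are assumptions about your own export, not Google's schema.
import pandas as pd

df = pd.read_csv("pmax_channel_report.csv")  # hypothetical export file

summary = df.groupby("segment").agg(
    cost=("cost", "sum"),
    conversions=("conversions", "sum"),
    conv_value=("conversion_value", "sum"),
)
summary["cpa"] = summary["cost"] / summary["conversions"]
summary["roas"] = summary["conv_value"] / summary["cost"]

print(summary.round(2))
```

If the “Ads using video” slice shows a meaningfully better CPA or ROAS, that is the data-backed case for expanding the video budget described above.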

Uncategorized

Google says AI Mode stays ad-free for Personal Intelligence users

The Evolution of Search: Google AI Mode and Personal Intelligence

The landscape of digital search is undergoing its most significant transformation since the invention of the PageRank algorithm. As artificial intelligence becomes the primary interface through which users interact with information, Google is pivoting its core product from a list of links to a comprehensive, conversational AI ecosystem. At the center of this evolution is Gemini and its specialized "AI Mode," a feature designed to provide direct answers and perform tasks on behalf of the user.

Recently, Google reached a major milestone by expanding "Personal Intelligence" within AI Mode to all users in the United States as a beta. The feature allows Gemini to access a user's personal data across the Google ecosystem, including Gmail, Google Drive, Google Photos, and YouTube, to provide highly tailored, context-aware responses.

With the integration of personal data, however, comes the inevitable question of monetization. In a recent clarification, Google confirmed that while it is actively testing advertisements within AI Mode, users who opt into the Personal Intelligence experience will remain ad-free for the time being. This decision highlights a delicate balancing act for the tech giant: the need to monetize expensive generative AI features versus the necessity of maintaining user trust when handling sensitive personal information. As Google navigates this transition, the implications for users, advertisers, and the broader tech industry are profound.

What is Personal Intelligence in AI Mode?

To understand the significance of the "ad-free" promise, one must first understand what Personal Intelligence actually does. In the context of Gemini and Google's AI-centric redesign, Personal Intelligence refers to the AI's ability to "read" and "understand" the user's specific digital footprint within Google's own apps. When a user enables these connections, Gemini moves beyond being a general-purpose chatbot and becomes a personalized digital assistant.

For example, a user could ask, "When does my flight to Chicago depart?" and Gemini would scan the user's Gmail for the confirmation email. A user might ask, "Show me photos of my dog from last summer," and Gemini would pull the relevant files from Google Photos. This deep integration allows a level of utility that generic AI models cannot match, because they lack the specific context of the individual's life.

The expansion of this feature to the U.S. beta audience signals Google's commitment to making Gemini the central nervous system of its productivity and entertainment suite. By connecting Search, Workspace, and YouTube, Google is creating a closed-loop system in which the AI knows the user's preferences, schedule, and history, allowing for "proactive intelligence."

The Current State of Ads in Google AI Mode

While Personal Intelligence users currently enjoy an ad-free experience, the broader AI Mode is already being used as a laboratory for the future of digital advertising. For several months, Google has been testing sponsored content and business connections within Gemini's responses to general queries in the United States. In these tests, if a user asks for advice on a topic with commercial intent, such as "What are the best hiking boots for rainy weather?", Google may include links to specific products or businesses alongside the AI-generated text.
According to Google, early feedback from these tests has been positive, with users reportedly finding the business connections "helpful" rather than intrusive. This suggests that the future of AI search will not look like the traditional "sidebar" or "top-of-page" ads of classic Search, but rather like integrated recommendations that feel like a natural part of the conversation. Google's stated goal is to ensure that these ads open up new opportunities for discovery.

The stakes are higher in AI Mode, however. In a traditional search engine, the distinction between an ad and an organic result is clear. In a conversational AI, where the bot provides a singular, authoritative answer, an ad can feel more like a biased recommendation. This is likely why Google is proceeding with caution, especially where personal data is involved.

Why Personal Intelligence is Ad-Free (For Now)

The confirmation that Personal Intelligence users will not see ads is a strategic move by Google to encourage adoption. There are three primary reasons why Google is maintaining this carveout for its most advanced AI experience.

1. Establishing User Trust and Privacy

Privacy is the biggest hurdle for any AI that requests access to personal emails and private photos. If users felt that their private correspondence in Gmail was being scanned to serve them targeted ads inside the AI chat interface, the backlash would be significant. By keeping the Personal Intelligence experience ad-free, Google provides a "safe space" in which users can experiment with these integrations without feeling that their privacy is being directly commodified in real time.

2. The Complexity of Contextual Targeting

Targeting ads based on a general search query like "best laptops" is straightforward. Targeting ads based on a user's private calendar or family photos is far more complex and fraught with ethical risk. Google is likely still refining the technology needed to ensure that, if ads are eventually introduced to this space, they are handled with extreme sensitivity and do not cross the line into being "creepy."

3. Data Gathering and User Retention

At this stage, Google prioritizes data and feedback over immediate ad revenue from this specific sub-segment. By offering a clean, ad-free, and highly useful personalized assistant, Google can secure a loyal user base that relies on Gemini for daily tasks. Once Gemini becomes an indispensable part of the user's workflow, Google will have more leverage to introduce monetization later on.

The Future Transition: Will Ads Eventually Arrive?

While the current status is ad-free, Google has not promised that it will stay that way forever. In fact, a Google spokesperson explicitly stated that, in the future, the company anticipates ads will operate similarly for people who choose to connect their apps with AI Mode. The key phrase was that such ads would continue to be "relevant to things like your query" and the surrounding context.
