
Why your website is now the source of truth in local AI search

Open ChatGPT, Claude, or Google Gemini and search for a local business you know has a strong, established online presence. Ask the AI for a specific recommendation in that category—perhaps a law firm, a specialized plumber, or a boutique marketing agency. In many cases, the business will appear in the response. If you dig deeper and look at the citations or sources the AI provides, you will almost certainly see the business’s own website listed as a primary reference. This reveals a fundamental shift in the digital landscape: AI does not conjure answers out of thin air. Large Language Models (LLMs) and AI search engines are not creative engines in the sense of inventing facts; they are retrieval engines. They pull from the most credible, accessible, and comprehensive information they can find. If your website is not the most complete and authoritative source of information about your own business, the AI will be forced to assemble a narrative from digital scraps—third-party directories, outdated reviews, or even competitor mentions. When that happens, you lose control of your brand story. Many business owners and digital marketers are currently asking the same existential question: “Do I even need a website anymore? If AI answers every query directly in the search results, why does my own domain matter?” The answer is that your website has evolved. It is no longer just a digital brochure or a lead-generation tool; it is now a source document. AI systems treat it as the authoritative input for their knowledge graphs. The real question is no longer whether you need a website, but who gets to define your business: you or a fragmented collection of third-party sources. Zero-click doesn’t mean zero opportunity The rise of “zero-click” searches—where a user gets an answer directly on the search engine results page (SERP) without clicking through to a website—has many marketers feeling uneasy. They see impressions holding steady while click-through rates (CTR) dip, leading to the premature conclusion that websites are becoming obsolete. However, this is a misunderstanding of how search intent works in the age of AI. Fewer clicks do not equate to less importance. Instead, the nature of the click has changed. When we look at the data regarding where AI Overviews (AIOs) actually appear, a clear pattern emerges. Analysis of Ahrefs data covering over 46 million keywords shows that nearly 99% of keywords triggering an AI Overview are informational in nature. Navigational keywords, where a user is looking for a specific site, account for a mere 0.13%. What does this mean for your business? It means the traffic you are “losing” to AI was likely never high-intent, revenue-driving traffic to begin with. If someone wants a quick fact—like “what is the average cost of a roof repair”—they get it from the AI and move on. These were “top of the funnel” visits that rarely resulted in immediate conversions. However, commercial and transactional keywords only make up 12.5% and 3.5% of AI Overview triggers, respectively. (Note that these totals overlap as a single keyword can have multiple intents). The clicks that drive your bottom line—the ones tied to phone calls, service bookings, and consultations—still happen. These high-value queries occur further down the funnel after an AI has already made a recommendation. When a customer is ready to pull the trigger, they don’t just trust the AI blindly; they navigate to the website to validate the recommendation. 
Your website is the destination for the “validation phase.”

AI recommends, your customer decides: Know the difference

Imagine a homeowner asking an AI assistant, “Who is the most reliable emergency plumber in downtown Chicago?” The AI will likely surface three or four names. It does this by pattern-matching based on location signals, review sentiment, and the content it has indexed from various websites. At this stage, the AI is offering a starting point, not a final verdict. The AI is not the one signing the contract or handing over credit card information. For high-stakes local decisions—choosing a pediatrician, a criminal defense attorney, or a high-end contractor—consumers are not going to act solely on an algorithmic suggestion. The “human element” of decision-making requires a level of trust that an AI summary cannot provide on its own.

After the AI provides its recommendation, the customer’s journey typically follows a predictable path: They search for the specific business name to find the official site. They read the most recent reviews to check for consistency. They look at photos of past work or the team to establish a visual connection. They visit the website to confirm the business offers the exact service they need at a price point they find acceptable. This validation phase is where the deal is closed. AI might get you a seat at the table, but your website is what wins the contract.

AI is actually making your website more valuable

It is a paradox of the modern web: the more AI dominates the search experience, the more valuable your original content becomes. AI systems are constantly “reading” your website to determine exactly what you do, who you serve, and why you are better than the competition. They are cross-referencing your site content with your Google Business Profile, local directory listings, and social media mentions to ensure your business is legitimate and consistent. When your website provides a clear, structured, and consistent narrative, the AI gains “confidence” in your business. High confidence leads to higher placement in AI-generated recommendations. Conversely, when your website is thin on details or contradicts your other listings, the AI’s confidence drops, and you get skipped in favor of a competitor with a clearer digital footprint.

Your website is now effectively a source document for LLMs. If you don’t provide the data, the AI will fill in the blanks using whatever it can find elsewhere—perhaps a disgruntled Yelp review from five years ago or an outdated directory that lists your old office address. By maintaining a robust website, you ensure the AI pulls from the most accurate and flattering information available.
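One concrete way to give crawlers that structured, consistent narrative is schema.org markup on the site itself. Below is a minimal JSON-LD sketch for a local business; every value in it (the business type, name, address, hours, and profile links) is hypothetical and should mirror your Google Business Profile and contact page exactly.

```json
{
  "@context": "https://schema.org",
  "@type": "Plumber",
  "name": "Example Plumbing Co.",
  "url": "https://www.example-plumbing.com",
  "telephone": "+1-312-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example St",
    "addressLocality": "Chicago",
    "addressRegion": "IL",
    "postalCode": "60601",
    "addressCountry": "US"
  },
  "openingHours": "Mo-Su 00:00-23:59",
  "areaServed": "Downtown Chicago",
  "sameAs": [
    "https://www.facebook.com/example-plumbing",
    "https://www.yelp.com/biz/example-plumbing"
  ]
}
```

Embedding a block like this on the homepage gives AI systems and search crawlers a machine-readable version of the same facts a human sees on your contact page, which is exactly the kind of consistency the retrieval systems described above reward.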


How to fix a suspended Google Merchant Center account

Finding a red banner at the top of your Google Merchant Center (GMC) account is a nightmare scenario for any e-commerce business. Unlike standard Google Ads suspensions, which might only stop a few search campaigns, a Merchant Center suspension is far more sweeping. It immediately severs your connection to Google Shopping, Local Inventory Ads, product feeds within Performance Max, dynamic remarketing, and even your free organic product listings. For many retailers, this represents the loss of their highest-ROI marketing channel overnight. Google’s policies for Shopping are notoriously stricter than its general advertising guidelines because Google acts as a digital storefront. If Google doesn’t trust your business, it won’t show your products to its users. To get back online, you need to understand the nuances of Google’s automated systems and manual review processes. This guide provides a comprehensive roadmap for identifying, fixing, and appealing a suspended Google Merchant Center account.

Case Study: Reinstating a “Misrepresented” Retailer

To understand how complex these suspensions can be, consider a recent case involving a UK-based e-commerce retailer. Their account was abruptly suspended for “Misrepresentation,” a vague term that often leaves merchants guessing. On the surface, the store was legitimate: they sold real products, had accurate pricing, and a functional website. However, Google’s automated systems flagged them as untrustworthy. A deep compliance audit revealed that the suspension wasn’t caused by one major violation, but rather by a “totality of evidence” that suggested the business might not be professional. The issues included: The “Contact Us” page lacked a physical business address and used a generic Gmail address instead of a domain-based email (e.g., info@yourstore.com). Policy pages for shipping, returns, and payments were either missing specific details—like restocking fees and defective item procedures—or were non-existent. A technical bot-blocker intended to stop spam was inadvertently preventing Google’s automated crawlers from verifying product data. Shopify’s automatic shipping synchronization was creating data conflicts within Merchant Center, leading to inconsistent shipping costs between the feed and the checkout page.

After creating a prioritized action list and correcting every single one of these “small” gaps, the client requested a review. Google approved the appeal, and the account was fully reinstated. The lesson here is clear: Google evaluates your entire ecosystem—not just a single product or page.

Step 1: Identify the Type of Suspension

Before you can fix the problem, you must accurately diagnose it. Google typically notifies you of a suspension via an email that cites a specific policy. You can also find detailed information within the Google Merchant Center interface by navigating to the “Needs Attention” tab. While Google’s descriptions can feel frustratingly vague, they fall into several primary categories. Understanding these categories is the first step toward a successful appeal.

Misrepresentation

This is the most frequent reason for suspension. Google uses this label when it cannot verify that your business is a legitimate, trustworthy entity. It covers everything from missing contact information and mismatched prices to poor third-party reviews. To fix this, you must focus on transparency across your Merchant Center settings, your product feed, your website, and your broader online reputation.
Counterfeit Products This is particularly common for resellers of high-demand brands like Nike, Prada, or Pokémon. Google is highly sensitive to the sale of unauthorized goods. If you are flagged for this, you should clearly state your relationship with the manufacturer on your website. Are you an authorized reseller? Do you purchase directly from the brand? Detailing your authentication process and ensuring your prices aren’t “too good to be true” compared to the MSRP can help clear your name. Website Needs Improvement If Google issues this flag, it means your site looks “under construction” or unprofessional. This could be due to placeholder text (like “Lorem Ipsum”), broken links, or a checkout process that fails during testing. Use incognito mode on multiple devices to ensure every button and page works perfectly for a first-time visitor. Unsupported Shopping Content Google Shopping is for physical goods, not services. While you can use Google Ads to promote a consulting business or a law firm, you cannot list them in the Merchant Center. Issues often arise when services are bundled with products (e.g., selling tires but including the installation fee in the price). Ensure you separate physical goods from labor or digital services on your product pages. Healthcare and Medicines This is a heavily regulated category. Depending on your country, you may need third-party certification from organizations like LegitScript. Google explicitly bans certain pharmaceuticals and supplements, and if your product descriptions make unverified medical claims, you will likely face a suspension. Always include clear disclaimers and link to scientific studies where applicable. DMCA Violations If another entity files a Digital Millennium Copyright Act (DMCA) report against you, Google will act quickly. These reports are often listed in the Lumen database. If you are using copyrighted images or text without permission, you must remove them immediately. If the report is false, you will need to provide documented proof of originality during your appeal. Step 2: Audit Your Merchant Center Settings Errors in the backend of Merchant Center are often the “smoking gun” in suspension cases. You must ensure that every field is filled out and that the data perfectly mirrors what is on your website. Accurate Business Information Your store name must follow Google’s naming conventions—avoid promotional text like “Free Shipping Store” or excessive capitalization. Your physical address must be a real location that matches your website’s contact page. Google’s AI often cross-references these addresses with Google Maps; if they don’t match or the address doesn’t exist, it triggers a red flag. Shipping and Returns Consistency Discrepancies in shipping and returns are a major trigger for suspensions. Every product in your feed must be covered by a shipping rule. If your website says shipping takes 3-5 days, but your Merchant Center setting says 7-10 days, Google views this as a lack of transparency. Ensure that handling times, shipping costs, and return windows are identical across both platforms. Step 3: Audit Your Product Feed Data Quality
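As a small illustration of the kind of consistency check this step involves, the sketch below compares the price in a feed export against the schema.org price published on each landing page, flagging mismatches of the sort that triggered the suspension in the case study above. The feed filename, the column names (id, price, link), and the assumption that prices sit in a flat JSON-LD "offers" object are all placeholders; adapt them to your actual feed and page markup.

```python
import csv
import json
import re

import requests

def page_price(url: str) -> str | None:
    """Return the first schema.org 'price' value found in the page's JSON-LD, if any."""
    html = requests.get(url, timeout=10).text
    for block in re.findall(
        r'<script[^>]*application/ld\+json[^>]*>(.*?)</script>', html, re.S
    ):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        # Only handles the simple, flat case; real markup is often nested.
        offers = data.get("offers") if isinstance(data, dict) else None
        if isinstance(offers, dict) and "price" in offers:
            return str(offers["price"])
    return None

with open("merchant_feed.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        feed_price = row["price"].split()[0]   # e.g. "19.99 GBP" -> "19.99"
        live_price = page_price(row["link"])
        if live_price is not None and live_price != feed_price:
            print(f"MISMATCH {row['id']}: feed={feed_price} page={live_price}")
```

A periodic run of a check like this, before Google's crawlers find the discrepancy for you, is far cheaper than an appeal.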


Why log file analysis matters for AI crawlers and search visibility

The landscape of digital discovery is undergoing a seismic shift. For decades, SEO professionals have relied on a predictable feedback loop: Google crawls a site, indexes the content, and provides performance data through Google Search Console. However, as Artificial Intelligence (AI) becomes the primary interface for how users find information, that feedback loop is breaking. We are entering an era of “black box” discovery where systems like ChatGPT, Claude, and Perplexity shape visibility through processes that are largely invisible to the average site owner.

The challenge is clear: there is no “Google Search Console” for AI. When an LLM (Large Language Model) provides an answer based on your content, you often have no direct way to know when that content was accessed, how much of it was read, or whether the bot encountered errors during the process. This lack of transparency creates a massive data gap. Without knowing how AI agents interact with your infrastructure, you cannot optimize for the very systems that are increasingly responsible for your brand’s authority and reach.

Log file analysis has emerged as the essential bridge across this gap. It represents the raw, unfiltered truth of what happens on your server. By recording every request made by every crawler, log files provide the missing layer of data needed to understand AI search visibility in a world without traditional reporting tools.

The Visibility Gap in the Age of AI Search

In traditional SEO, behavior and performance are intrinsically linked. If you see a spike in impressions in Google Search Console, you can usually trace it back to increased crawl activity or improved indexing. You can see which URLs Googlebot prioritizes and identify where it struggles. This clarity allows for precise technical optimization.

AI search platforms offer no such luxury. While platforms like ChatGPT and Perplexity are actively crawling the web to build datasets and power real-time retrieval-augmented generation (RAG), they do not provide a dashboard showing your “AI index coverage.” This creates a situation where your content might be influencing AI-generated answers, but you are left guessing about the mechanics behind it. This is particularly concerning because AI crawlers often consume content without sending traditional “click” traffic back to the source. If a user gets a complete answer from an AI agent, they may never visit your website. In this environment, visibility is the new currency, and log files are the only way to audit that currency.

Emerging Sources of AI Visibility

While the major AI players have been slow to provide transparency, we are starting to see the first signs of native reporting. Bing has taken a lead in this area by introducing Copilot-related insights within Bing Webmaster Tools. This report provides a glimpse into how AI-driven systems interact with websites, marking a significant first step toward a more transparent AI ecosystem. Alongside native tools, a new category of “AI SEO” platforms is emerging. Tools like Scrunch and Profound focus specifically on AI visibility, tracking how brand mentions appear in AI responses and monitoring how various agents interact with specific domains. Many of these platforms connect directly to infrastructure layers like Cloudflare, allowing them to monitor crawler activity without the need for manual log exports. However, even these tools have limitations.
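For context, this is roughly what an AI crawler request looks like in a raw access log (Apache/Nginx combined log format). The IP addresses, paths, and timestamps are invented for illustration, and the user-agent strings are abbreviated versions of what the real bots send:

```
203.0.113.7 - - [12/May/2025:14:03:11 +0000] "GET /blog/headless-cms-guide HTTP/1.1" 200 48213 "-" "GPTBot/1.0 (+https://openai.com/gptbot)"
203.0.113.9 - - [12/May/2025:14:05:40 +0000] "GET /pricing HTTP/1.1" 200 18907 "-" "PerplexityBot/1.0 (+https://perplexity.ai/perplexitybot)"
```

Every such line records the URL requested, the response code, the bytes served, and the agent that asked, which is precisely the data the AI platforms themselves do not report back to you.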
Most third-party platforms operate within a limited timeframe, often surfacing only recent agent activity. This makes them excellent for monitoring “hot” trends but less effective for long-term strategic planning. AI crawler activity is notoriously inconsistent; unlike Googlebot, which maintains a relatively steady presence, AI agents often crawl in sporadic bursts. To identify meaningful patterns, you need historical data that spans months, not just days. Log files provide this permanence. Decoding the Two Categories of AI Crawlers To analyze log files effectively, you must first understand that not all AI bots are created equal. In your server logs, these bots appear as “user agent strings.” While it is tempting to group them all as “AI,” they generally fall into two distinct categories: training crawlers and retrieval crawlers. Training Crawlers: The Builders of Knowledge Training crawlers are responsible for collecting the massive datasets used to build and refine LLMs. Common agents include GPTBot (OpenAI), ClaudeBot (Anthropic), CCBot (Common Crawl), and Google-Extended. These bots are the “librarians” of the AI world. Their behavior is typically broad and infrequent. They don’t crawl for real-time accuracy; they crawl to understand topics, language patterns, and facts. If these bots are missing from your logs, it suggests a foundational problem: your content may not be included in the datasets that shape how AI systems understand your industry. This can lead to your brand being ignored in favor of competitors whose data was successfully ingested during the training phase. Because training cycles happen periodically, these bots may appear in your logs for a week and then disappear for a month. This is why a short log retention window is dangerous—you might assume a bot is blocked when it simply hasn’t reached its next crawl cycle yet. Retrieval and Answer Crawlers: The Real-Time Agents Retrieval crawlers, such as ChatGPT-User and PerplexityBot, operate on a much tighter loop. These bots are often event-driven, triggered by specific user queries. When a user asks an AI a question that requires up-to-date information, the AI sends a retrieval agent to find the most relevant, current source. Their behavior is highly targeted. Instead of crawling your entire site, they may jump straight to a specific article or a single data point. In your log files, this looks like “surgical” activity. If retrieval bots consistently hit your high-level category pages but never reach your deep-dive technical guides, it indicates a discovery issue. The AI “knows” you have a category for the topic but cannot find the specific answers hidden deeper in your architecture. Traditional Bots vs. AI Bots: A Widening Gap Googlebot and Bingbot remain the gold standard for crawl behavior. They are efficient, follow established rules, and provide a baseline for “crawlability.” However, log file analysis
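Putting the two crawler categories described above into practice, a minimal log-parsing sketch might count hits per AI user agent and bucket them as training or retrieval crawlers. The user-agent substrings and the log path are assumptions; extend the lists as new agents appear in your logs.

```python
import re
from collections import Counter
from pathlib import Path

# Substrings to look for in the user-agent field of each request.
TRAINING_BOTS = ["GPTBot", "ClaudeBot", "CCBot", "Google-Extended"]
RETRIEVAL_BOTS = ["ChatGPT-User", "PerplexityBot"]

# Combined log format: the user agent is the last quoted field on the line.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

hits = Counter()
for line in Path("access.log").read_text(encoding="utf-8", errors="replace").splitlines():
    match = UA_PATTERN.search(line)
    if not match:
        continue
    user_agent = match.group(1)
    for bot in TRAINING_BOTS:
        if bot in user_agent:
            hits[("training", bot)] += 1
    for bot in RETRIEVAL_BOTS:
        if bot in user_agent:
            hits[("retrieval", bot)] += 1

for (category, bot), count in sorted(hits.items()):
    print(f"{category:10s} {bot:15s} {count}")
```

Run against several months of retained logs, a tally like this shows whether training crawlers are ingesting your site at all and whether retrieval crawlers are reaching your deeper pages or only your category hubs.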


Why your Google Ads results keep repeating the same outcomes

The Paradox of the “Well-Optimized” Google Ads Account For years, the playbook for paid search was straightforward. Success was the direct result of granular optimizations. Digital marketers spent their days adjusting manual bids, restructuring campaign hierarchies, refining match types, and aggressively mining search term reports to add negative keywords. If performance dipped, you turned a dial. If it soared, you leaned into the specific keyword responsible. The relationship between action and outcome was linear and transparent. Today, many account managers still operate under this legacy framework. When auditing these accounts, they often appear “well-optimized” on the surface. They feature active management logs, clean structures, and targets that align perfectly with achieved Return on Ad Spend (ROAS). On paper, the account is healthy. Yet, the business owners are frustrated because performance is stuck in a loop. Results keep repeating the same outcomes, and no matter how many “optimizations” are made, the needle doesn’t move toward actual growth. The reality is that Google Ads has undergone a fundamental architectural shift. The platform no longer responds to isolated, manual optimizations in the way it once did. Instead, it operates as a machine learning system that builds on the signals you provide over time. If your results are stagnant, it is likely because you are no longer optimizing the account—you are training the system to stay exactly where it is. When an advertiser says, “That change didn’t work,” what they usually mean is that their recent tweak wasn’t powerful enough to override months of prior training signals. Why isolated optimizations don’t move the needle anymore Modern Google Ads environments are dominated by black-box technologies: Smart Bidding, Performance Max (PMax), Broad Match expansion, and modeled conversions. These are not tools that reset every time you make a change. They are cumulative learners. They function more like an athlete being coached than a machine being programmed. When you raise a ROAS target this week, that single action does not exist in a vacuum. It must compete with six months of reinforced signals that told the system what a “good” conversion looks like. If you launch a new experimental campaign but shut it down after only 10 days because the CPA was too high, the system doesn’t simply forget that campaign. It learns that volatility is punished, and it becomes more hesitant to explore new auctions in the future. It interprets your quick “pause” as a command to avoid uncertainty. Google’s AI continuously optimizes toward the behaviors that survive. It favors the campaigns that get funded, the keywords that consistently hit targets, and the strategies that avoid being paused. Consequently, if your account has plateaued despite what looks like “strong management,” it is rarely because your bids are slightly off. It is because you have trained the system to avoid the very uncertainty where growth lives. You have taught Google that safe, predictable demand is your only priority. What training looks like in a Google Ads account To fix a repeating cycle of outcomes, you must understand how Google Ads answers the fundamental question: “What does success look like for this advertiser?” The system does not read your mind; it infers your goals from a series of technical and behavioral signals. Specifically, it looks at: Conversion Inclusion: Which specific actions are you telling the system to optimize for? 
Are they high-value purchases or soft leads? Value Assignment: How much are those conversions worth to you? Are you providing static values or real-time profit data? Budget Protection: Which campaigns do you leave untouched during a market dip, and which ones do you cut immediately? Reaction Time: How quickly do you react to performance swings? Frequent, reactionary changes signal to the AI that stability is the only acceptable state.

Over months, these signals shape the system’s behavior in the auction. They dictate which queries the system expands into via Broad Match, which audience segments it prioritizes in Performance Max, and how aggressively it competes for top-of-page placement. Training is about the direction you reinforce over the long haul. If repeat customers hit your ROAS target easily while prospecting campaigns fluctuate, the system will naturally migrate your budget toward those repeat customers. It is the path of least resistance for the algorithm.

Consider a common pattern in mature accounts: In Month 1, non-brand (prospecting) search drives 52% of revenue. By Month 6, non-brand revenue has dropped to 36%, but the total account ROAS has actually improved. On the surface, the manager looks like a hero. In reality, the system has learned that predictable revenue (usually from branded search or remarketing) is more important than incremental growth. The account is “improving” itself into a corner where it only talks to people who already know the brand.

How you might be training Google Ads wrong

The most dangerous mistakes in modern PPC management are subtle. They are often framed as “best practices” or “responsible management,” which makes them incredibly difficult to identify without a shift in perspective. Here are the three primary ways advertisers accidentally train their accounts for stagnation.

Mistake 1: Training on the easiest revenue

Branded search and returning customers are the “low-hanging fruit” of digital marketing. They convert at high rates, carry low CPAs, and make your dashboard look incredible during promotional periods. Naturally, many advertisers lean into these areas, scaling budgets behind what is already working and protecting those “efficient” dollars. However, over time, this teaches Google that predictable revenue is the only path to success. When the system sees that you are willing to spend more on branded terms while starving non-brand terms of budget, it stops trying to find new customers. It concludes that your business model is built on recycling existing demand.

Look at this data as an example of the “Safety Trap”:

Month | Branded Cost % | Account ROAS
1     | 33%            | $5.44
2     | 35%            | $5.03
3     | 40%            | $6.10
4     | 38%            | $6.69
5     | 42%            | $7.06
6     | 46%            | $7.39

In this scenario, the account’s total ROAS improved significantly over six months. Most stakeholders would be thrilled.
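To make the “Safety Trap” concrete, the sketch below splits the blended ROAS into brand and non-brand components for Month 1 and Month 6. The per-channel cost and revenue figures are back-solved from the percentages quoted above (branded cost share, account ROAS, non-brand revenue share) per 100 units of spend; they are illustrative, not reported numbers.

```python
# Brand / non-brand splits back-solved from the article's figures, per 100 units of spend.
months = {
    "Month 1": {"brand_cost": 33, "brand_rev": 261, "nonbrand_cost": 67, "nonbrand_rev": 283},
    "Month 6": {"brand_cost": 46, "brand_rev": 473, "nonbrand_cost": 54, "nonbrand_rev": 266},
}

for label, m in months.items():
    total_cost = m["brand_cost"] + m["nonbrand_cost"]
    total_rev = m["brand_rev"] + m["nonbrand_rev"]
    blended_roas = total_rev / total_cost
    nonbrand_share = m["nonbrand_rev"] / total_rev
    nonbrand_roas = m["nonbrand_rev"] / m["nonbrand_cost"]
    print(f"{label}: blended ROAS {blended_roas:.2f}, "
          f"non-brand revenue share {nonbrand_share:.0%}, "
          f"non-brand ROAS {nonbrand_roas:.2f}")
```

The output shows blended ROAS climbing from 5.44 to 7.39 while the non-brand revenue share falls from 52% to 36%, which is exactly the pattern a top-line dashboard hides.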


March 2026 Google core update more volatile than December — here’s what changed

Understanding the Magnitude of the March 2026 Core Update The search engine optimization landscape experienced a seismic shift in early 2026. While Google releases several core updates every year, the March 2026 core update has proven to be one of the most disruptive in recent memory. Following closely on the heels of the December 2025 update, the March rollout demonstrated a level of ranking volatility that caught many digital marketers and site owners off guard. Data indicates that this update was not merely a refinement of existing signals but a significant recalibration of how Google defines authority and value. By analyzing exclusive data and expert insights, it becomes clear that the “bar” for ranking in the top positions has been raised, favoring primary sources and established brands over the intermediary sites that have dominated the search engine results pages (SERPs) for years. Breaking Down the Volatility: March 2026 vs. December 2025 To understand the impact of the March 2026 update, it is essential to compare it to its predecessor. The December 2025 core update was considered significant at the time, yet the March data reveals a much higher degree of churn across all ranking tiers. According to data provided by SE Ranking, the stability of the SERPs reached a new low during this period. The Top 3 Tectonic Shift In the most competitive tier of search—the top three organic results—the volatility was staggering. Approximately 79.5% of URLs in these positions changed, compared to 66.8% during the December update. This means that nearly eight out of ten listings at the very top of Google were swapped out or reshuffled. For businesses that rely on these “money positions” for traffic, the update represented a high-stakes environment where long-held rankings were no longer guaranteed. The Top 10 and Top 100 Exodus The disruption extended throughout the first page and beyond. In the top 10 results, 90.7% of URLs experienced a shift in position, an increase from the 83.1% seen in December. Perhaps more alarming for SEO professionals is the rate at which pages disappeared entirely. Roughly 24.1% of pages that previously held a top 10 ranking fell out of the top 100 results altogether. In contrast, only 14.7% of top 10 pages saw a similar drop-off during the December update. This suggests that the March 2026 update was more punitive toward sites that failed to meet Google’s evolving quality standards. The Complication of Overlapping Updates Analyzing the specific cause of these shifts is made more complex by the timing of Google’s rollout schedule. The March 2026 core update began its rollout exactly one day after the March 2026 spam update had finished its course. This overlapping sequence makes it difficult for analysts to attribute specific ranking drops to a single factor. However, industry consensus and historical patterns suggest that while the spam update likely removed lower-quality or manipulative content from the index, the core update was responsible for the broader re-evaluation of site authority. The spam update essentially “cleared the deck,” allowing the core update’s new ranking logic to take hold with amplified intensity. This cumulative effect is likely why the March volatility numbers were so much higher than those in December. The Rise of Destination Sources and the Fall of Intermediaries Independent analysis conducted by SEO expert Aleyda Solis, utilizing Sistrix data from late March through mid-April, provides a clearer picture of the “intent” behind these shifts. 
The data reveals a consistent trend: Google is moving visibility away from intermediary sites and toward “destination” sources. An intermediary site is one that acts as a middleman—aggregators, directories, and comparison platforms that curate information or listings from other sources. A destination source, conversely, is the primary entity, the official organization, or the specialist who owns the data or provides the service directly. This shift aligns with Google’s long-term goal of reducing “search friction,” sending users directly to the source rather than through a series of third-party portals. Who Gained Visibility? The winners of the March 2026 core update generally fall into four distinct categories: Official and Institutional Domains: Government websites (.gov) and recognized institutional bodies saw significant gains, particularly for queries involving data, facts, and public records. Specialist and Niche Experts: Sites that focus deeply on a single topic rather than a broad range of subjects were rewarded for their topical authority. Established Brands: Well-known entities with high brand recognition and direct consumer trust performed better than lesser-known competitors. Dominant Platforms: Large-scale platforms that host massive amounts of original content or user-generated data also saw visibility increases. Who Lost Visibility? The biggest losers in this update were the sites that traditionally sat between the user and the final destination. This includes: Aggregators: Sites that pull listings from various sources without adding significant unique value. Directories: Generalized business or service directories that offer little more than basic contact information or links. Comparison-Driven Sites: Affiliate-heavy platforms that focus on comparing products or services, especially those that lack original, hands-on testing or unique insights. Vertical-Specific Impact: Winners and Losers by Industry The March 2026 update did not affect all industries equally. By examining specific sectors, we can see how Google’s preference for destination sources manifested in real-world search results. Jobs and Employment The recruitment sector saw one of the most dramatic shifts. Major job aggregators like ZipRecruiter and Glassdoor, which often dominate search for job-related queries, lost ground. In their place, Google elevated direct employer sites. For example, queries for corporate roles began showing internal portals like Amazon.jobs or specialized government platforms like USAJobs more frequently. This suggests that if a user is looking for a job, Google prefers to send them directly to the company hiring rather than a third-party job board. Health and Medical Information Health search results underwent a significant re-sorting. Broad consumer health blogs and lifestyle sites that provide general medical advice saw visibility declines. Meanwhile, clinical, research-driven, and specialist sources—such as academic journals and specialized medical institutions—gained visibility. This is a continuation of Google’s focus on E-E-A-T (Experience, Expertise, Authoritativeness, and


SMX Now: The automation drift and how to correct course

Understanding the Paradox of Modern Google Ads Automation The landscape of digital advertising has shifted dramatically over the last decade. We have moved from a world of manual keyword bidding and granular control to an era dominated by machine learning, artificial intelligence, and automated bidding strategies. Google Ads, in particular, has leaned heavily into “Smart” features, promising advertisers that the algorithm can find the right customer at the right time more efficiently than any human ever could. However, a dangerous phenomenon has emerged alongside these advancements: automation drift. Automation drift occurs when the machine learning models driving your campaigns begin to optimize for metrics that do not align with your actual business goals. Because these systems are designed to find the path of least resistance to a “conversion,” they often find loopholes in your settings. They might chase cheap, low-quality leads or serve ads to audiences that have no intention of purchasing, simply because those actions satisfy the algorithm’s internal logic. The upcoming SMX Now session, featuring Ameet Khabra of Hop Skip Media, dives deep into this reality. As Khabra points out, automation doesn’t fail because it’s broken; it fails because it does exactly what it is trained to do. If the signals provided to the machine are incomplete or misaligned, the machine will “drift” away from profitability while reporting record-breaking numbers. The Mirage of Success: When 417% More Conversions Mean Less Revenue One of the most compelling aspects of the upcoming SMX Now discussion is the case study of a specific account that experienced a staggering 417% jump in conversions. On paper, any digital marketer would celebrate such a statistic. In a typical reporting dashboard, a triple-digit increase in conversion volume usually signals a massive win for the brand and the agency. But in this instance, the success was an illusion. While the conversion count skyrocketed, the actual business revenue did not follow suit. The automation had discovered a way to generate “conversions” that were technically valid according to the tracking pixels but were practically useless to the sales team. This scenario is becoming increasingly common. When Google Ads is given a broad mandate to “maximize conversions,” it will look for the cheapest conversions possible. If your tracking is set to count a “Contact Us” page visit as a conversion, or if it doesn’t distinguish between a high-value lead and a spam bot filling out a form, the algorithm will flood the account with the latter. It is the ultimate example of the “Garbage In, Garbage Out” (GIGO) principle. To the machine, a conversion is a conversion. To the business, those 417% additional conversions were simply noise that wasted budget and resources. The Four Pillars of Automation Drift To combat this issue, advertisers must understand the four specific ways that automation drift manifests within an account. By categorizing the drift, marketers can develop specific interventions to pull the algorithm back on track. 1. Signal Drift Signal drift is perhaps the most fundamental threat to a successful campaign. This happens when the data being fed back into Google Ads—the “signals”—do not accurately reflect the value of the customer. If you are bidding based on a simple conversion pixel without accounting for lead quality or offline sales, you are experiencing signal drift. 
The algorithm starts to favor users who are “click-happy” or likely to convert on a soft offer, rather than users who are likely to become long-term, high-value clients. Correcting signal drift requires implementing sophisticated tracking methods, such as Enhanced Conversions, Offline Conversion Tracking (OCT), and Value-Based Bidding, to ensure the machine knows which wins actually matter. 2. Query Drift Query drift is a direct result of the industry’s move toward Broad Match and the expansion of “close variants.” In the past, a keyword like “luxury watches” would trigger ads for exactly that. Today, Google’s semantic understanding might decide that “cheap digital clocks” or “watch repair near me” are close enough. While the intent might seem related to the algorithm, the commercial intent is vastly different. Query drift happens when the automation begins to bid on terms that are tangentially related but do not convert at a profitable rate. Without a robust negative keyword strategy and a constant eye on the Search Terms Report, your budget can quickly be swallowed by irrelevant traffic that the machine mistakenly believes is relevant. 3. Inventory Drift As Google introduces more “black box” campaign types like Performance Max (PMax), advertisers have less control over where their ads actually appear. Inventory drift occurs when your ads migrate from high-intent locations (like the Search results page) to lower-quality placements across the Display Network, YouTube Shorts, or mobile apps. We have all seen the reports of ads appearing in the middle of mobile games or on “made-for-advertising” websites. If the algorithm finds that it can get a “conversion” (like a view or a cheap click) more easily on a flashlight app than on a premium search result, it will shift your budget there. This drift dilutes brand equity and often results in accidental clicks that the system misinterprets as genuine interest. 4. Creative Drift With the rise of Responsive Search Ads (RSAs) and automated asset generation, the machine now has the power to mix and match headlines, descriptions, and images. Creative drift occurs when the combinations generated by the AI lose their marketing punch, fail to adhere to brand guidelines, or become repetitive and nonsensical. While Google’s AI tests various combinations to see which gets the highest Click-Through Rate (CTR), a high CTR does not always mean a high-quality user. Sometimes, a provocative or “clickbaity” headline combination created by the AI might drive traffic that has no intention of buying, leading to a high bounce rate and wasted spend. Diagnosing Drift: How to Spot the Warning Signs Early Detecting automation drift before it drains your quarterly budget requires a proactive approach to account management. You cannot simply “set it and forget it.” Advertisers need to implement a framework for regular audits that go beyond the surface-level
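One practical starting point for catching query drift early is a periodic scan of the search terms report export for terms that spend without converting. The sketch below assumes a CSV export with "Search term", "Cost", and "Conversions" columns and a spend threshold chosen for illustration; column names and thresholds vary by export format and account size.

```python
import csv

SPEND_THRESHOLD = 50.0   # flag terms that spent at least this much...
MAX_CONVERSIONS = 0.0    # ...while producing no conversions

candidates = []
with open("search_terms_report.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        cost = float(row["Cost"].replace(",", ""))
        conversions = float(row["Conversions"].replace(",", ""))
        if cost >= SPEND_THRESHOLD and conversions <= MAX_CONVERSIONS:
            candidates.append((cost, row["Search term"]))

for cost, term in sorted(candidates, reverse=True):
    print(f"${cost:>8.2f}  {term}  -> review as negative keyword candidate")
```

A list like this does not replace human judgment about intent, but it turns the vague worry that "Broad Match is wandering" into a ranked set of terms to review each week.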


Google adds campaign-level filtering to bulk ad review appeals

In the complex and often frustrating world of digital advertising, few things disrupt a marketing strategy more than a sudden wave of ad disapprovals. For years, search engine marketing (SEM) professionals and agency managers have navigated a rigid appeal process that often felt like using a sledgehammer when a scalpel was required. Recognizing this friction, Google has introduced a significant workflow update: campaign-level filtering for bulk ad review appeals.

This update provides advertisers with the granular control they have long requested. Instead of being forced into an “all-or-nothing” approach when appealing policy violations, users can now isolate specific campaigns for review. While it may seem like a minor administrative tweak, for those managing large-scale accounts with thousands of active creatives, it represents a major shift toward operational efficiency and precision.

Understanding the Shift in the Bulk Appeal Process

To appreciate why this update is a welcome change, one must look at how the appeal process functioned previously. When Google’s automated systems—or occasionally human reviewers—flagged ads for policy violations, advertisers had limited options for bulk remediation. If an account suffered from widespread disapprovals due to a shared landing page issue or a misunderstood keyword, the advertiser typically had to appeal the entire account’s eligible ads at once. This “blanket” approach presented several challenges. First, it often included ads from legacy or paused campaigns that the advertiser had no intention of reviving, cluttering the review queue. Second, it made it difficult to track which specific fixes were working. If an advertiser attempted to fix ads in Campaign A but wasn’t quite ready to submit Campaign B, they were often stuck in a bottleneck. The new “Select eligible campaigns” option removes these hurdles entirely.

How the New Campaign Selector Works

The new functionality is integrated directly into the Google Ads policy violations interface. When an advertiser navigates to the Policy Manager to address disapprovals, they are now greeted with a more refined workflow. Instead of a single button to “Appeal All,” there is a dedicated option to select specific campaigns that are eligible for a re-review. When you click “Select eligible campaigns,” a list of campaigns containing disapproved ads appears. Advertisers can then check the boxes for the specific campaigns they have updated or verified for compliance. Once the selection is confirmed, only the ads within those specific parameters are sent back to Google’s policy team for review. This ensures that the review team’s time is spent on ads that have actually been modified to meet guidelines, rather than wasting resources on ads that will likely be rejected again.

The Strategic Importance of Granular Control

For high-volume advertisers and agencies, the ability to filter appeals at the campaign level offers several strategic advantages. Digital marketing is no longer just about bidding; it is about managing the technical health of an account. This update directly impacts three key areas: time management, data integrity, and agency-client relationships.

1. Drastic Reduction in Workflow Friction

Time is the most valuable currency in the tech and gaming industries, where product launches and seasonal events dictate the pace of work.
Before this update, an advertiser who fixed an error in a high-priority “New Release” campaign might have been forced to wait while the system processed appeals for hundreds of unrelated, low-priority ads across the account. By filtering for the specific campaign that matters most, advertisers can prioritize their most lucrative traffic sources and get them back online faster.

2. Improved Precision and Testing

In many cases, an ad disapproval is not the result of a clear violation but rather a “grey area” interpretation of Google’s ever-evolving policies. Advertisers often use a trial-and-error approach to see what wording or landing page elements will pass the automated scanners. With campaign-level filtering, an advertiser can run a “test” appeal on a single campaign to see if their fix is successful before rolling it out to the rest of the account. This prevents the entire account from being flagged for repeated failed appeals, which can sometimes lead to more severe account-level penalties.

3. Cleaner Account Management for Agencies

Agencies managing My Client Center (MCC) accounts or large enterprise clients often have different team members responsible for different product lines or regions. If a specialist in the “Gaming Hardware” division fixes their ads, they shouldn’t have to inadvertently trigger a review for the “Software Subscriptions” division’s ads if that team hasn’t finished their edits. The new filtering system allows for a modular workflow where teams can work independently without interfering with each other’s submission schedules.

Why Bulk Disapprovals Happen: The Context

The timing of this update is particularly relevant given the increasing frequency of “false positive” disapprovals. As Google relies more heavily on AI and machine learning to police its platform, the system occasionally experiences waves of unexplained disapprovals. Recently, many advertisers reported that perfectly compliant ads were suddenly flagged for “Malicious Software” or “Government Documents and Official Services” violations due to glitches in the automated detection algorithms. When these widespread issues occur, the ability to bulk appeal is essential. However, because these glitches often affect different campaigns in different ways, having the ability to segment the response is vital. Advertisers can now separate the ads they know are compliant (and were likely flagged in error) from those that might actually need a landing page update.

Common Policy Hurdles in Tech and Gaming

In the tech and gaming sectors, ad disapprovals are common due to the specific nature of the products. Some frequent triggers include: Trademarks: Using brand names of consoles or competitors in ad copy. Destination Requirements: Broken links or landing pages that don’t meet Google’s speed and transparency standards. Restricted Content: Ads for games that feature gambling-like mechanics or loot boxes, which are subject to varying regional laws. Misrepresentation: Claims about “free” hardware or software that are not clearly substantiated on the landing page. With campaign-level filtering, if a gaming company is running a campaign for a Mature-rated title and a separate campaign
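Before deciding which campaigns to include in a bulk appeal, it helps to inventory which campaigns contain disapproved ads and which policy topics triggered them. A minimal sketch using the Google Ads Query Language (GAQL) via the google-ads Python client might look like the following; it assumes a configured google-ads.yaml credentials file, and the customer ID and grouping logic are illustrative rather than a prescribed workflow.

```python
from collections import defaultdict

from google.ads.googleads.client import GoogleAdsClient  # assumes the google-ads package is installed

CUSTOMER_ID = "1234567890"  # placeholder account ID

QUERY = """
    SELECT
      campaign.name,
      ad_group_ad.ad.id,
      ad_group_ad.policy_summary.policy_topic_entries
    FROM ad_group_ad
    WHERE ad_group_ad.policy_summary.approval_status = 'DISAPPROVED'
"""

client = GoogleAdsClient.load_from_storage()          # reads credentials from google-ads.yaml
service = client.get_service("GoogleAdsService")

disapprovals = defaultdict(set)
for batch in service.search_stream(customer_id=CUSTOMER_ID, query=QUERY):
    for row in batch.results:
        for entry in row.ad_group_ad.policy_summary.policy_topic_entries:
            disapprovals[row.campaign.name].add(entry.topic)

# Review the per-campaign policy topics before choosing which campaigns to submit for re-review.
for campaign, topics in sorted(disapprovals.items()):
    print(f"{campaign}: {', '.join(sorted(topics))}")
```

An inventory like this makes it easier to separate campaigns that were likely hit by a false-positive wave from campaigns that genuinely need landing page or copy fixes before you trigger the new “Select eligible campaigns” flow.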


Your homepage matters again for SEO — here’s why

In the early days of the commercial internet, website architecture was a relatively straightforward affair. Designers and developers operated under the “filing cabinet” model, where a website was built around a single, grand entryway: the homepage. This “front door” served as the primary point of contact for every visitor. Whether a user was looking for a specific product, a company’s history, or technical support, they almost always started at the top and navigated through a hierarchical structure to find their destination. Then, the SEO revolution changed everything. As search engine algorithms became more sophisticated, the way people accessed the web shifted from a linear path to a fragmented one. Suddenly, every single page on a website had the potential to be a landing page. High-quality blog posts, specific product descriptions, and niche landing pages became the new “front doors.” Users no longer needed to enter through the homepage; they could be dropped directly into the heart of a site, landing on the exact piece of content that satisfied their specific query. For nearly two decades, digital marketers and SEO professionals have focused their energy on these “deep links.” We optimized for the long-tail, built complex internal linking structures to route users from informational blog posts to high-conversion product pages, and often treated the homepage as a mere brand placeholder or a navigational hub for those who already knew who we were. However, we are now entering a new era. Driven by the rapid adoption of Artificial Intelligence (AI) and Large Language Models (LLMs), the pendulum is swinging back. Your homepage is becoming the most critical asset in your SEO strategy once again. How SEO inverted web design To understand why the homepage is regaining its throne, we must first look at how the SEO industry transformed web design in the early 2000s. As Google rose to dominance, those of us in the field had to adapt our understanding of information architecture (IA). We took the traditional principles of IA and layered them with SEO-centric thinking. This shift inverted the standard route through a website. Instead of a top-down approach, we created a “spidery maze” of entry points. The goal was to rank for “money terms”—specific, high-intent keywords—on dedicated inner pages. By mapping long-tail keywords to blog posts or category pages, we could meet users exactly where they were in the buyer’s journey. This approach was highly effective: it bypassed the general nature of the homepage and funneled users directly toward the specific product or service they were searching for. In this environment, the homepage became less of a “must-be-everything-to-everyone” battleground. It was allowed to focus on broad brand messaging and general keywords, while the heavy lifting of lead generation and sales was distributed across hundreds or thousands of deeper pages. We stopped worrying about the homepage as the primary driver of traffic, focusing instead on the reverse-conversion paths that turned blog readers into customers. But as AI tools begin to dominate the research phase of the consumer journey, this decentralized model is facing a major disruption. The great AI reversal The informational long-tail traffic that once sustained deep-link landing pages is being swallowed by AI. Tools like ChatGPT, Claude, Perplexity, and Google’s own Gemini are fundamentally changing how users interact with information. 
When a user has a question, they no longer need to click through a list of search results to find a blog post that explains a concept. AI Overviews and LLMs handle the heavy lifting of research, comparison, and summarization directly within the search interface. Consider the typical user journey today. Instead of searching for “how to choose a headless CMS” and clicking on three different articles, a user asks an AI tool for a comparison. The AI provides a concise summary of the top players, their pros and cons, and a recommendation based on the user’s specific needs. By the time that user actually decides to visit a website, they aren’t looking for general information anymore—they are looking for a specific brand that the AI has already vetted for them. This shift is fueling a massive resurgence in branded search. Once the AI has convinced the user that your brand is the solution to their problem, the user doesn’t go back to generic queries. They search for your brand name. And when they search for your brand name, they don’t land on a deep-link blog post; they land on your homepage. This is the “great reversal”: the homepage is once again the primary entryway, but it is now receiving “warmed-up” traffic that is ready to convert, provided the site’s architecture doesn’t get in the way. The problem: The erosion of the deep link For years, the standard SEO funnel looked like this: Upper Funnel: Informational blog posts and guides acting as landing pages to capture broad interest. Mid Funnel: Product or service pages designed to drive leads and provide detailed specifications. Lower Funnel: Case studies, pricing pages, and testimonials that provide the final “nudge” toward a sale. This model is under siege because traditional informational click-through rates (CTR) are declining. If a search engine can answer a query like “What are the benefits of a headless CMS?” with a 300-word AI-generated summary, the user has no reason to click on your “Ultimate Guide to Headless CMS” blog post. Your informational content is still being used—AI agents are crawling it to generate their answers—but you aren’t getting the direct traffic you once did. The consequence is a loss of segmentation and context. When a user lands on a deep page, you know exactly what they want because of the keyword that brought them there. When they land on your homepage via a branded search, you know they are interested in you, but you don’t necessarily know *why*. If your information architecture isn’t designed to greet these motivated users and quickly funnel them to the right place, you will lose them to a competitor who makes the process easier. The psychology of AI: The path of least resistance


Agentic engine optimization: Google AI director outlines new content playbook

Understanding the Shift: What is Agentic Engine Optimization? The landscape of digital content is undergoing its most significant transformation since the invention of the search engine. For decades, Search Engine Optimization (SEO) has been the primary framework for how information is organized, discovered, and consumed online. However, as artificial intelligence transitions from simple chatbots to autonomous “agents,” a new discipline is emerging. Addy Osmani, a Director of Engineering at Google Cloud AI, has recently introduced a new framework called Agentic Engine Optimization (AEO). While the acronym is sometimes shared with “Answer Engine Optimization,” Osmani’s definition is distinct and far more technical. It refers specifically to the process of making web content usable, parsable, and actionable for AI agents—autonomous systems designed to fetch, analyze, and execute tasks on behalf of a user. In this new paradigm, the target audience is no longer just a human reader scrolling through a browser. Instead, the audience is an agent that skips the user interface entirely, extracting raw data to complete a multi-step workflow. This shift demands a complete rethink of how we structure, format, and deliver content. How AI Agents Are Redefining the Web Experience To understand AEO, one must first understand the behavior of an AI agent. Unlike a traditional human user, an agent does not “browse.” It does not appreciate high-resolution hero images, it does not click on internal links to explore a brand’s story, and it certainly does not engage with “sticky” navigation or pop-up newsletters. AI agents collapse the traditional browsing experience into a single request. If a user asks an agent to “find the best shipping rates for a 5lb package and generate a comparison table,” the agent identifies relevant sources, extracts the specific pricing data, and returns the final result. Because of this, traditional engagement metrics—such as bounce rate, time on page, and scroll depth—become secondary or even irrelevant. If an agent visits your site, it intends to extract value in milliseconds. If your site structure prevents that extraction, the agent will move on to a competitor’s site that is better optimized for machine readability. The Token Economy: The New Currency of Content One of the most critical insights from Osmani’s guidance is the role of the “token.” In the world of Large Language Models (LLMs), text is processed in chunks called tokens. Every AI model has a “context window,” which is the maximum number of tokens it can process at one time. Osmani highlights that token limits are a primary constraint shaping content performance. When a webpage is too wordy, filled with unnecessary “fluff,” or structurally complex, it consumes a large portion of the agent’s context window. This leads to three significant problems: 1. Truncated Information If an agent’s context window is filled with your site’s header navigation, sidebar links, and a 500-word introductory anecdote, it may run out of space before it ever reaches the actual data it needs. This results in the agent “dropping” the most important parts of your content. 2. Skipped Pages Agents are designed for efficiency. If a page appears too dense or computationally “expensive” to parse without a clear payoff, the agent may simply skip the page entirely in favor of a more concise source. 3. 
Hallucinated Outputs When an agent is forced to work with truncated or fragmented data due to token limits, the likelihood of “hallucination”—where the AI fills in the gaps with incorrect information—increases dramatically. By providing concise, token-efficient content, you reduce the risk of an AI misrepresenting your brand or data. Consequently, token count is becoming a primary optimization metric, much like page load speed or keyword density used to be. Restructuring Content for Machine Patience For years, SEO experts have debated the value of “long-form content.” While long-form remains valuable for human readers who want deep dives, AI agents have what Osmani describes as “limited patience.” To optimize for these agents, content creators must adopt a “Front-Loaded” strategy. The First 500 Tokens Osmani recommends placing the core answers or data points as early as possible—ideally within the first 500 tokens of a page. This ensures that even if the agent has a limited context window, it captures the most vital information immediately. The End of the “Burying the Lead” In traditional blogging, it is common to use a “hook” or a long preamble to build rapport with the reader. For AEO, this is counterproductive. Agents want structured data, clear definitions, and direct answers. Subheadings should be descriptive and functional, and paragraphs should be compact and focused on a single concept. Markdown: The Language of the Agentic Web Perhaps the most technical recommendation in the AEO playbook is the move toward Markdown over HTML. While HTML is the foundation of the visual web, it is inherently “noisy.” A single paragraph of text in HTML is often wrapped in dozens of lines of code, including div tags, classes, styles, and scripts. For an AI agent, this code is digital clutter. It costs tokens to process and makes parsing more difficult. Osmani suggests that businesses should consider serving clean Markdown (.md) versions of their pages alongside their traditional HTML versions. The Benefits of Markdown for AEO Markdown is lightweight and focuses entirely on content hierarchy. It uses simple symbols to denote headings, lists, and tables, which LLMs are natively designed to understand. By making .md versions of documentation or data directly accessible, you provide a “high-speed lane” for AI agents. This doesn’t mean deleting your website’s design. Instead, it involves creating a parallel, machine-readable infrastructure. This could be as simple as providing a “View as Markdown” link or using server-side logic to detect an AI crawler and serve it a simplified version of the content. Discovery and Structure: The New Standards Just as SEO has sitemaps and robots.txt, Agentic Engine Optimization is seeing the emergence of new standards designed to help agents navigate codebases and content libraries. Osmani points to several files that act as “shortcuts” for AI systems: llms.txt A proposed standard, the llms.txt file serves as
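For reference, the community proposal behind llms.txt (llmstxt.org) describes a Markdown file served from the site root that gives agents a short summary and a curated list of links, often pointing to Markdown versions of key pages. The example below is a minimal, hypothetical sketch of that shape; the business, sections, and URLs are invented, and the exact conventions are still an emerging proposal rather than a ratified standard.

```
# Example Store

> Example Store sells refurbished gaming hardware in the UK. The key policy,
> product, and documentation pages for AI agents are listed below.

## Policies
- [Shipping and returns](https://www.example-store.co.uk/policies.md): delivery times, costs, and the 30-day return window

## Products
- [Product catalogue](https://www.example-store.co.uk/catalogue.md): full product list with prices and availability

## Docs
- [Warranty and repair guides](https://www.example-store.co.uk/docs/warranty.md): coverage terms and the claim process
```

Because the file is plain Markdown, it is extremely token-efficient: an agent can read the whole index in a few hundred tokens and then fetch only the specific page it needs.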


The PACT framework for PPC: How to move beyond ‘it depends’

The Problem with “It Depends” in Paid Search

In the world of Pay-Per-Click (PPC) advertising, there is a phrase that has become a universal shield for experts facing difficult questions. Whether it is a high-stakes client meeting, a session at a major marketing conference, or a thread on a digital marketing forum, you will inevitably hear those two words: “It depends.”

Usually accompanied by a knowing nod or a sympathetic smile, this phrase is the ultimate conversation stopper. While technically accurate—because digital marketing is indeed a landscape of variables—it offers absolutely zero utility to the person asking the question. It is a placeholder for an answer rather than an answer itself. As the industry evolves and data becomes more accessible, the “it depends” excuse is increasingly viewed as a professional cop-out.

This issue isn’t exclusive to PPC. SEO pioneer Aleyda Solis famously called out this exact pattern in the search engine optimization community, noting that it has become an industry-wide epidemic. Whether you are managing Google Ads, social media campaigns, or organic search strategies, the refusal to provide concrete guidance under the guise of “complexity” hinders progress and erodes trust between specialists and stakeholders.

Why We Default to the “It Depends” Cop-Out

To move beyond this phrase, we first have to understand why we use it. Not every question in PPC is equally difficult to answer. We can generally categorize queries based on their complexity and the amount of data required to provide a meaningful response. Usually, “it depends” is reserved for the hardest questions because the stakes of being wrong are higher. Consider the spectrum of PPC questions:

Simple factual questions: “What is the maximum number of Responsive Search Ads (RSAs) per ad group?” This requires no interpretation; you simply look up the current Google Ads documentation.

Data-driven interpretations: “Why did my Cost Per Acquisition (CPA) spike last week?” This requires looking at the data and applying a layer of interpretation to identify the cause.

Predictive queries: “What will my Return on Ad Spend (ROAS) look like if I increase the monthly budget by 30%?” This requires data, interpretation, and an understanding of market context and diminishing returns.

Strategic prescriptions: “What bid strategy should I use for a new product launch?” This is the peak of complexity. It requires data, interpretation, context, and a deep understanding of the business’s specific priorities and risk tolerance.

The more variables involved, the more an expert feels the need to hedge. However, being an expert means having the ability to navigate that complexity for the client. That is where the PACT framework comes into play.

Introducing the PACT Framework: A Strategic Alternative

The PACT framework is designed to replace “it depends” with structured, actionable insights. PACT stands for Process, Anchors, Conditions, and Trade-offs. This framework assumes that you are providing advice in a context where you may not have the asker’s live data immediately in front of you—such as during a presentation or a preliminary discovery call. Even without a live dashboard, the PACT framework allows you to provide an answer that is far more useful than a simple “it depends.”

P: Process – Providing a Structured Path to the Answer

For diagnostic and prescriptive questions, the most valuable thing you can give someone is a map.
If you cannot give them the final answer because you lack their specific data, you can give them the exact process you would use to find that answer. As David Rodnitzky famously noted, an agency without a process is just a collection of individuals running around doing things. High-level PPC management requires repeatable structures. When a client asks a “why” or “should I” question, your response should be a walk-through of your internal methodology.

The Power of Flowcharts and Decision Trees

Visual aids are incredibly effective at breaking down the “it depends” wall. One of the most legendary examples in the industry is the Rimm-Kaufman Group’s (now Merkle) performance troubleshooting flowchart from their Dossier 3.2. It took the massive, daunting question of “Why did my performance drop?” and turned it into a series of binary “Yes/No” checkpoints. By providing a flowchart, you shift the conversation from a vague mystery to a logical investigation. You can show the user how to check for technical errors, then competitive shifts, then seasonal trends, and finally landing page issues. Similarly, for “Should I?” questions, decision trees—like those used by Aleyda Solis for SEO decision-making—help stakeholders visualize the logic behind a strategic pivot.

A: Anchors – Grounding the Conversation with Data and Examples

An “anchor” is a piece of evidence-based data that provides a baseline for the conversation. Instead of saying a result “depends” on the industry, you provide the industry standards and explain how they vary. This grounds the hypothetical in reality.

Using Benchmarks Effectively

Benchmarks are the most common form of anchors. If someone asks what a “good” conversion rate is for an e-commerce store, “it depends” is technically true, but saying “The average for health and beauty is 3.3%, while electronics usually sits around 1.9%” provides immediate value. The more specific the benchmark (segmented by industry, platform, or region), the more authoritative your answer becomes.

The “Usual Suspects” and the 80/20 Rule

In many PPC scenarios, the Pareto Principle applies: 80% of problems are caused by 20% of the variables. Instead of a 50-step process, you can offer a “Usual Suspects” list. If a CPA spikes, you can say: “Usually, it’s one of these five things: a change in tracking, a new competitor entering the auction, a budget cap being hit, a negative keyword conflict, or a landing page error. Check these first.” This gives the asker a high-probability starting point (a simple sketch of how this checklist can be codified appears at the end of this section).

The Weight of Case Studies

Real-world examples are powerful anchors. If a client asks what will happen if they consolidate their campaigns, you can share a specific (anonymized) result: “In a recent account spending $50k a month, we consolidated 12 campaigns into four. We saw a 20% improvement in CPA after the initial 14-day learning period.”
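To show how a “Usual Suspects” list can become a repeatable, shareable process rather than tribal knowledge, here is a minimal sketch in Python. The fields and checks are invented placeholders; in a real account each one would be answered by querying your ad platform or analytics data.

```python
# Minimal sketch: turning the "usual suspects" CPA-spike checklist into code.
# All fields are hypothetical placeholders; populate them from your own reporting.
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    tracking_changed: bool           # did conversion tracking change recently?
    new_auction_competitor: bool     # did auction insights show a new entrant?
    budget_capped: bool              # are campaigns limited by budget?
    negative_keyword_conflict: bool  # are negatives blocking converting queries?
    landing_page_error: bool         # is the landing page down, slow, or broken?

def diagnose_cpa_spike(snapshot: AccountSnapshot) -> list[str]:
    """Return the likely causes of a CPA spike, in the order we would check them."""
    suspects = [
        ("A change in conversion tracking", snapshot.tracking_changed),
        ("A new competitor entering the auction", snapshot.new_auction_competitor),
        ("A budget cap being hit", snapshot.budget_capped),
        ("A negative keyword conflict", snapshot.negative_keyword_conflict),
        ("A landing page error", snapshot.landing_page_error),
    ]
    return [cause for cause, triggered in suspects if triggered]

if __name__ == "__main__":
    snapshot = AccountSnapshot(
        tracking_changed=False,
        new_auction_competitor=True,
        budget_capped=True,
        negative_keyword_conflict=False,
        landing_page_error=False,
    )
    causes = diagnose_cpa_spike(snapshot) or ["None of the usual suspects; dig deeper"]
    for cause in causes:
        print("Investigate:", cause)
```

The point is not the code itself but the habit it enforces: the same five checks, in the same order, every time the question comes up.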
