

The perfect local business contact page built for Google and conversions

When most business owners think about their website’s contact page, they view it as a necessary but boring utility. It is often the last page designed, usually featuring nothing more than a generic “Get in Touch” headline, a standard contact form, and perhaps a phone number. This approach is a significant missed opportunity. In the world of modern SEO and conversion rate optimization (CRO), your contact page is one of the most powerful tools in your digital arsenal.

For a local business, the contact page is not just a place for people to find your phone number; it is a critical data source for search engines. Google uses this page to verify your business’s existence, location, and legitimacy. If you provide minimal data, you are essentially telling Google you don’t want to be found. By transforming this page into a robust asset, you can boost your prominence in local search results and significantly increase the percentage of visitors who turn into leads.

Why Google Pays Special Attention to Your Contact Page

The importance of the contact page is not just a theory; it comes from the heart of Google’s own local search operations. Joel Headley, a former head of Google Business Profile (formerly Google My Business) Support, has noted that Google specifically crawls and parses contact pages to gather “entity” information. They are looking for signals that verify your business name, address, and phone number (NAP) against other data points across the web.

When Google crawls your site, it isn’t just looking at your blog posts or service descriptions. It is looking for the “Source of Truth” regarding your physical location and operational hours. Most businesses fail this test by offering “thin content” on their contact page. By providing a rich set of data, you are making it easier for Google’s algorithms to trust your business, which directly correlates to better rankings in the Local Map Pack.

To build a contact page that satisfies both Google’s bots and human visitors, you need to treat it with the same level of care as a high-stakes landing page. This means incorporating identity, trust, location relevance, and clear calls to action.

1. Establishing a Strong Business Identity

Your contact page should never feel like a disconnected part of your website. It must reinforce your brand identity immediately. From a local SEO perspective, this helps search engines connect your digital presence with your physical “entity.”

Consistent Branding and Visuals

Ensure that your business logo is prominent and matches the signage at your physical location. This visual consistency helps customers who may have seen your storefront in person feel they are in the right place. Additionally, include your slogan or a brief value statement. If your slogan includes a natural keyword—such as “Chicago’s Leading Residential Electrician”—it provides an extra SEO nudge without looking like keyword stuffing.

The Introduction and UVP

Don’t jump straight into the form. Start with a short, welcoming introduction. Explain what your business does and where you are located. More importantly, reiterate your Unique Value Proposition (UVP). Why should someone contact you instead of the competitor down the street? Whether it is “24/7 emergency service” or “Family-owned for 40 years,” this brief copy sets the tone for the interaction.

2. Providing Complete and Actionable Contact Information

It sounds obvious, but many businesses miss the mark on basic contact details. Accuracy is the foundation of local SEO.
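One way to make that data unambiguous for crawlers is to pair the visible NAP details with structured data. Below is a minimal sketch using schema.org’s LocalBusiness vocabulary, rendered from Python; every value here is a placeholder, and your real markup should mirror your Google Business Profile exactly.

```python
import json

# Minimal schema.org LocalBusiness markup for a contact page.
# All values are placeholders: swap in your exact NAP details so
# they match your Google Business Profile character for character.
local_business = {
    "@context": "https://schema.org",
    "@type": "Electrician",  # use the most specific type that fits your business
    "name": "Example Electric Co.",
    "telephone": "+1-312-555-0100",
    "email": "hello@example.com",
    "url": "https://www.example.com/contact",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 W Example Ave",
        "addressLocality": "Chicago",
        "addressRegion": "IL",
        "postalCode": "60601",
        "addressCountry": "US",
    },
    "openingHoursSpecification": [
        {
            "@type": "OpeningHoursSpecification",
            "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
            "opens": "08:00",
            "closes": "17:00",
        }
    ],
    "sameAs": [
        "https://www.facebook.com/example",
        "https://www.linkedin.com/company/example",
    ],
}

# Emit the JSON-LD <script> block to paste into the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(local_business, indent=2))
print("</script>")
```

Because this block is plain text rather than an image, it is trivially crawlable, which reinforces the advice below about keeping NAP details out of images.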
Any discrepancy between the address on your contact page and the address on your Google Business Profile can lead to a drop in rankings because it creates “data friction” for the search engine.

The Essentials of NAP

Ensure your Name, Address, and Phone number are written in a way that is easy for bots to crawl (avoid putting this information inside an image). You should also include a direct email address alongside your form. Some users prefer the transparency of a direct email over a web form, and offering both caters to different user preferences.

Expanding Communication Options

Modern consumers often prefer texting over calling. If your business line is text-enabled, clearly state “Call or Text us at [Number].” Additionally, list your social media profiles. This doesn’t just provide another way to connect; it shows Google that you have a multi-faceted digital presence, which adds to your business’s authority.

Optimizing Hours of Operation

Include your standard hours of operation, but don’t stop there. Mention holiday hours or seasonal changes. If you offer specific shopping options—such as curbside pickup, delivery, or “by appointment only”—list these clearly. This information is frequently used by Google to answer specific user queries like “stores open now near me.”

3. The Art of the Google Maps Embed

Nearly every local business embeds a map, but most do it incorrectly. A common mistake is embedding a map of a physical address rather than a map of the specific Google Business Profile listing.

How to Embed the Right Map

To do this correctly, go to Google Maps and search for your specific business name, not just your street address. Once your profile appears, click the “Share” button, select “Embed a map,” and copy that code. When you embed the profile-specific map, every interaction a user has with that map—zooming in, clicking for directions—sends engagement signals directly to your Google Business Profile. These signals are a known factor in improving your local ranking.

Driving Direction Links

Consider adding a text link that says “Get Driving Directions” which leads directly to your Google Maps listing. There is evidence suggesting that the frequency of users requesting directions to your business is a potent ranking signal. By making it easy for users to trigger that request from your contact page, you are actively encouraging a behavior that helps your SEO.

4. Building Trust with Social Proof

By the time a visitor reaches your contact page, they are likely considering hiring you or buying from you. They are looking for one final “nudge” to confirm they are making the right choice. This is where social proof becomes your


How to write paid search ads that outperform your competitors

In the high-stakes world of Pay-Per-Click (PPC) advertising, the battle for the top spot on the Search Engine Results Page (SERP) is more intense than ever. With Google and Microsoft constantly evolving their algorithms and introducing new automated features, many advertisers have fallen into a trap of complacency. They set up their campaigns, let the machine learning take over, and rarely look back at the actual words appearing in front of their potential customers.

The reality is that your paid search ads do not exist in a vacuum. They are positioned directly against three or four other competitors, all vying for the same limited attention span of the user. If your copy is generic, repetitive, or lacks a clear value proposition, you are essentially handing market share to your rivals. To truly outperform the competition, you must approach ad copywriting with a mix of data-driven strategy and creative psychological triggers.

How often do you step back and view your ads through the eyes of a consumer? Do your headlines blend into a sea of “Best Service” and “Quality Products,” or do they offer something tangible that demands a click? Let’s explore the essential strategies for writing paid search ads that don’t just show up, but win.

1. Think about how assets will appear together, not just individually

With the transition to Responsive Search Ads (RSAs) as the industry standard, the way we write ads has fundamentally changed. Gone are the days of static Expanded Text Ads where you knew exactly which Headline 1 would pair with which Headline 2. Today, Google’s machine learning takes up to 15 headlines and four descriptions and mixes them into thousands of possible combinations.

The mistake many digital marketers make is treating these 15 headline slots as a checklist to be filled with variations of the same keyword. If you provide headlines like “Project Management Software,” “Project Management Solution,” and “Top Project Management,” there is a high probability that Google will display them together. The result? A redundant, unprofessional-looking ad: “Project Management Software – Project Management Solution – Project Management.”

To avoid this, you must treat each asset as a unique building block. Instead of repeating your primary keyword in every slot, categorize your headlines into three buckets: keywords, social proof/benefits, and calls to action (CTAs). For example, a successful mix might look like this:

Headline 1: Project Management Software
Headline 2: Trusted by 3 Million Users
Headline 3: Try It Free for 14 Days

If you want to maintain control over your brand’s messaging while still utilizing RSA technology, use the “pinning” feature. By pinning a headline to Position 1, you ensure your primary keyword always appears first, while letting the algorithm test different social proof or CTA headlines in Positions 2 and 3. This ensures variety and prevents the “bland and repetitive” trap that plagues so many modern PPC campaigns.

2. Don’t obsess over ad strength

Google Ads prominently displays an “Ad Strength” rating—ranging from “Poor” to “Excellent”—as you build your ads. While this metric is intended to be a helpful guide, it is often misunderstood as a definitive indicator of performance. Many advertisers waste hours chasing an “Excellent” rating by adding every suggested keyword and filling every single available character, often at the expense of clear, persuasive copy.
It is important to remember that Ad Strength is a measure of relevance and diversity of assets, not a prediction of conversion rates. An ad can have “Excellent” strength because it includes 15 unique headlines, but if those headlines are confusing or off-brand, it won’t convert. Conversely, a “Good” or even “Average” ad that uses pinned headlines to ensure a specific, high-converting value proposition is shown can often outperform a more diverse, unpinned ad.

Focus on quality over quantity. Ensure your headlines speak accurately to your user’s pain points. If pinning a specific headline to Position 1 drops your ad strength from “Excellent” to “Good,” but that headline is your strongest selling point, keep it pinned. The goal is to convert the user, not to please the Google Ads interface.

3. Use AI as a partner, but don’t blindly outsource all your copy to AI

Generative AI has revolutionized the speed at which we can create content. Both Google and Microsoft now offer integrated AI tools that can generate ad assets with a single click. Furthermore, Large Language Models (LLMs) like ChatGPT or Claude can spin up hundreds of ad variations in seconds. However, the “set it and forget it” approach to AI copy is a recipe for mediocrity.

AI tools excel at overcoming writer’s block and suggesting synonyms, but they lack the nuanced understanding of your specific brand voice and the current market landscape. AI-generated copy can often feel “hallucinated” or generic. It might use phrases that your target audience doesn’t actually use, or worse, it might make claims that are factually inaccurate. The human touch is particularly vital in highly regulated industries such as healthcare, finance, or legal services. AI models are not always up to date with the latest compliance requirements or legal disclaimers required in your ad copy.

Use AI to brainstorm, to find new ways to phrase a benefit, or to shorten a headline that is two characters too long. But always review, edit, and fact-check every line before it goes live. You are the expert on your brand; the AI is just your assistant.

4. Include value propositions and back them up

In a world of empty superlatives, specificity is your greatest weapon. Every advertiser claims to be the “best,” “fastest,” or “cheapest.” These words have become white noise to the modern consumer. To stand out, you need to provide concrete evidence for your claims.

Instead of saying you are the “Top Local Contractor,” try “Voted Best Local Contractor 2024 by [Local News Outlet].” This adds an external layer of credibility that a self-proclaimed title lacks. Numbers are particularly effective at catching the eye and building trust. Incorporate data points that highlight your scale and experience:

Longevity:


Are Citations In AI Search Affected By Google Organic Visibility Changes?

The Evolution of Search and the Rise of AI Citations

The digital marketing landscape is currently undergoing its most significant transformation since the invention of the search engine itself. For decades, the primary goal of Search Engine Optimization (SEO) was to secure a position in the “ten blue links” on the first page of Google. However, with the emergence of Large Language Models (LLMs) and generative AI search tools like ChatGPT, Perplexity, and Google’s own AI Overviews, the metric for success is shifting. Today, visibility is increasingly defined by “citations”—the references and links provided by AI models when they answer user queries.

As these AI tools become more integrated into the daily search habits of millions, a critical question has emerged among SEO professionals and digital publishers: Is there a direct link between traditional organic search performance and AI citation frequency? Recent research, including a notable analysis by Lily Ray, suggests that the answer is a resounding yes. There appears to be a profound correlation between a website’s health in Google’s organic index and its ability to be cited as a source by generative AI. This relationship suggests that the foundational principles of SEO—authority, relevance, and helpfulness—are not just relics of the past but are the very pillars that support visibility in the future of AI-driven discovery.

The Direct Link: Analyzing the Correlation

The core of the recent investigation into AI visibility centered on an analysis of 11 specific websites. These sites were selected because they had all experienced significant declines in organic visibility following major Google algorithm updates, such as the Helpful Content Update (HCU) and various Core Updates. By tracking how these sites performed in AI search environments during their period of decline in Google, a clear pattern emerged. When a website loses its “trust” or ranking power in Google’s eyes, it simultaneously begins to vanish from the citation lists of AI search engines.

This trend was observed most aggressively in ChatGPT’s search capabilities. As these 11 sites saw their organic traffic from Google crater, their presence as sources for ChatGPT’s responses dropped in near-unison. This correlation is not a coincidence. It reflects the technical reality of how AI search engines function. While an LLM like GPT-4 is trained on a massive static dataset, modern “AI search” features rely on Retrieval-Augmented Generation (RAG). This process involves the AI searching the live web to find the most relevant, high-quality information to satisfy a user’s prompt. If a site is no longer deemed authoritative or “helpful” by the primary gatekeepers of the web (search engines), the AI tools that use those search indexes as their source material will naturally stop citing them.

How AI Search Engines Source Information

To understand why Google visibility impacts AI citations, one must understand how AI search engines “read” the internet. Tools like ChatGPT (with Search), Perplexity AI, and Google AI Overviews do not simply guess the answers. They operate as sophisticated aggregators. When a user asks a complex question, the AI performs a search—often using existing search engine APIs like Bing or Google—to retrieve a set of documents. It then synthesizes the information from those documents into a natural language response. The websites that appear at the top of these real-time search results are the ones most likely to be cited by the AI.
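To make that retrieve-then-synthesize loop concrete, here is a deliberately simplified sketch of Retrieval-Augmented Generation. The search_api and llm objects are hypothetical stand-ins rather than any vendor’s real API; the point is structural: the model can only cite what the retrieval step hands it.

```python
def answer_with_citations(query, search_api, llm, top_k=10):
    """Simplified RAG loop: citations can only come from retrieved docs."""
    # Step 1: retrieval, typically backed by a traditional search index.
    # A site demoted in that index never enters the candidate set.
    docs = search_api.search(query)[:top_k]

    # Step 2: synthesis. The LLM summarizes only the retrieved documents.
    context = "\n\n".join(
        f"[{i + 1}] {doc.title}: {doc.snippet}" for i, doc in enumerate(docs)
    )
    answer = llm.generate(
        f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
    )

    # Step 3: the citation list is drawn from the same candidate set,
    # which is why organic visibility and AI citations move together.
    return answer, [doc.url for doc in docs]
```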
Therefore, if a website is penalized or demoted in traditional search results, it essentially becomes invisible to the RAG process. If you aren’t on the first page of the search results that the AI “reads,” you won’t be included in the AI’s summary. This creates a double-jeopardy scenario for publishers: a loss in Google rankings leads to a simultaneous loss in AI referral traffic and brand mentions.

The Impact of Google’s Helpful Content Updates

The 11 sites analyzed were primarily victims of Google’s shift toward prioritizing “helpful content” created for humans rather than search engines. Over the past two years, Google has refined its ability to identify sites that exist primarily to capture search traffic through mass-produced, low-value, or overly optimized content. When the Helpful Content Update (HCU) hits a site, it often results in a site-wide suppression of visibility.

The analysis shows that AI models are effectively “inheriting” these quality signals. If Google determines that a site lacks E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness), AI models seem to reach the same conclusion, likely because they rely on Google’s (or Bing’s) index to filter for quality. ChatGPT, in particular, showed the strongest correlation in the study. This suggests that OpenAI’s search integrations are heavily reliant on the authority signals already established by major search engines. For publishers, this means that the “quality” of their content is being judged by a singular standard that governs both traditional and generative search.

ChatGPT vs. Perplexity: Different Degrees of Impact

While the correlation between Google visibility and AI citations is broad, the degree of impact varies across different platforms. The analysis noted that while ChatGPT showed a very tight correlation with Google’s organic losses, other platforms like Perplexity AI sometimes showed more resilience—though they were not entirely immune.

ChatGPT’s search functionality appears to prioritize highly authoritative, “mainstream” sources that are already dominant in search engine result pages (SERPs). When a niche site loses its standing in Google, ChatGPT is quick to replace it with a more “stable” source like Wikipedia, a major news outlet, or a high-authority Reddit thread. Perplexity, on the other hand, occasionally sources from a wider variety of “long-tail” results. However, even in Perplexity, the downward trend for the 11 impacted sites was visible. This indicates that while different AI models have different “sorting” algorithms for their citations, they all rely on the same fundamental data: the searchable web. If a site is excluded from the top tier of the web index, it loses its “sourceability” across the entire AI ecosystem.

The Role of E-E-A-T in the AI Era

Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) have been the cornerstone of Google’s search quality evaluator guidelines for years. The recent data


Google Ads support now requires account change authorization

The Evolution of Google Ads Support

The landscape of digital advertising is constantly shifting, not just in terms of algorithms and bidding strategies, but also in how platforms interact with their users. For years, Google Ads has been the cornerstone of many digital marketing strategies, providing businesses with a robust platform to reach potential customers. However, as the platform becomes increasingly automated and complex, the support infrastructure is also undergoing a radical transformation.

Advertisers have recently noticed a significant change in the way they interact with Google Ads support. What used to be a straightforward process of submitting a ticket or jumping on a chat has now become a more formal agreement involving account permissions. Specifically, Google Ads support now requires explicit authorization from the advertiser before certain help requests can even be processed. This authorization grants Google specialists the power to access and make changes directly within the advertiser’s account.

This development marks a pivotal moment in the relationship between Google and its advertisers. It highlights a growing trend toward deeper platform integration, while simultaneously raising important questions about liability, control, and the future of account management.

The New Support Workflow: From AI to Authorization

Navigating the Google Ads support system has become a multi-layered experience. The first point of contact for most users is now a beta AI chat interface. This AI-driven assistant is designed to handle common queries, provide links to help documentation, and resolve simple technical issues without the need for human intervention. This shift is part of Google’s broader strategy to integrate artificial intelligence into every facet of its ecosystem, aiming to reduce the volume of tickets handled by human staff.

However, many PPC (Pay-Per-Click) specialists and account managers find that their issues are often too complex for an AI bot to solve. When a user decides that the AI chat is insufficient and opts to submit a traditional support form, they are met with a new requirement: a mandatory “Authorisation” checkbox.

The wording of this authorization is specific and carries significant weight. By ticking the box, the advertiser is granting a Google Ads specialist permission to act on behalf of the company. This permission allows the specialist to reproduce issues, troubleshoot technical bugs, and, most importantly, make direct changes to the account settings, campaigns, or tracking configurations. Without ticking this box, submitting the support request may be impossible, effectively making account access a prerequisite for receiving human-led technical assistance.

Understanding the Fine Print: Liability and Risk

The introduction of the authorization checkbox is not just a procedural update; it is a legal and operational shift in responsibility. The fine print associated with this new requirement is clear and unambiguous. Google explicitly states that it does not guarantee specific results from any changes made by its specialists. Furthermore, the advertiser is informed that any adjustments made during the troubleshooting process are conducted at the advertiser’s own risk.

This creates a high-stakes environment for businesses, particularly those operating with large budgets or complex account structures. When a Google specialist enters an account to “troubleshoot,” they may adjust bidding strategies, change keyword match types, or modify conversion settings.
While these changes are intended to fix an issue, they can have unintended consequences on the account’s performance. Under this new policy, the advertiser remains solely responsible for the impact of these changes. If a specialist’s adjustment leads to a sudden spike in spending or a drop in conversion rates, the financial and performance repercussions fall squarely on the advertiser. This “hands-off” approach to liability from Google’s end means that advertisers must be extremely cautious when requesting help that requires account-level modifications.

The Trade-Off: Speed vs. Control

For many digital marketers, the core of the issue lies in the trade-off between speed and control. Granting a Google specialist direct access to an account can undoubtedly accelerate the troubleshooting process. Instead of a long back-and-forth exchange of screenshots and instructions, the specialist can see the problem firsthand and apply a fix immediately. In a world where every hour of downtime or misconfiguration can result in lost revenue, this speed is highly valuable.

However, this convenience comes at the cost of control. Professional PPC managers take pride in the meticulous calibration of their accounts. Every bid adjustment and negative keyword is often the result of data-driven strategy and hours of testing. Allowing an outside party—even one from Google—to make changes introduces a level of unpredictability.

This shift is particularly concerning for agencies that manage accounts on behalf of clients. An agency’s reputation and contracts are built on its ability to maintain performance and manage budgets effectively. If a Google specialist makes a change that negatively impacts a client’s ROI, the agency may find itself in a difficult position, having authorized access that led to the decline.

The Role of Automation and AI in Support

The requirement for account change authorization should be viewed through the lens of Google’s wider push toward automation. In recent years, Google Ads has introduced features like Performance Max, auto-applied recommendations, and broad match expansion, all of which move control away from the individual advertiser and into the hands of Google’s machine learning algorithms.

The new support model fits perfectly into this trajectory. By funneling users through an AI chat first and then requiring authorization for human support, Google is streamlining its operations. The goal is likely to minimize the manual labor involved in support while training its AI systems to handle more complex tasks over time. For the advertiser, this means that the “human touch” in support is becoming a premium service that requires a significant concession of account privacy and control. It reflects a future where managing a Google Ads account is less about manual adjustments and more about managing the permissions and parameters within which Google’s own systems and staff operate.

Impact on Different Tiers of Advertisers

The impact of this change will likely be felt differently across the spectrum of Google Ads users. Small business owners who manage their own accounts may


A First Look at 2026: Leveraging AI to Boost Lead Handling and Drive Better Results

The Evolution of Lead Management Toward 2026

The digital marketing landscape is shifting at a pace that was once considered impossible. As we look ahead to 2026, the traditional methods of capturing and nurturing leads are becoming relics of the past. For agencies and internal sales teams alike, the challenge is no longer just about generating traffic or filling a database with contact information. The real battleground has shifted toward lead handling—the critical window between interest and conversion.

In the coming years, the differentiator between a successful agency and one that stagnates will be the integration of Artificial Intelligence (AI) into the core of their sales operations. We are moving away from reactive lead management and entering an era of proactive, predictive, and hyper-personalized engagement. By 2026, AI will not just be a supplementary tool; it will be the primary engine that drives lead response times, qualifying criteria, and long-term nurturing strategies.

The Speed-to-Lead Paradigm Shift

For years, the “five-minute rule” has been the gold standard in sales: if you don’t contact a lead within five minutes of their inquiry, the odds of qualifying them fall dramatically (a commonly cited figure puts the decline at 400%). By 2026, five minutes will be considered far too slow. The consumer of the future expects instantaneous gratification. When a potential client submits a form or engages with a chatbot, they expect an immediate, intelligent response that acknowledges their specific needs.

AI-driven autonomous agents are now being developed to handle these initial interactions with human-like nuance. Unlike the clunky chatbots of the early 2020s, the AI of 2026 leverages advanced Natural Language Processing (NLP) and real-time data retrieval to answer complex questions, schedule meetings, and even provide preliminary quotes. This ensures that no lead goes cold simply because a human representative was in a meeting or out of the office.

Predictive Lead Scoring: Beyond Basic Demographics

Traditional lead scoring often relies on static data: job title, company size, or industry. While these metrics are helpful, they often fail to capture the true intent of a prospect. In 2026, AI-driven predictive lead scoring will analyze thousands of data points across the “dark funnel”—those untraceable interactions that occur on social media, third-party review sites, and private communities.

By leveraging machine learning algorithms, agencies can identify which leads are most likely to convert based on behavioral patterns rather than just demographic profiles. This allows sales teams to prioritize their energy on “high-intent” prospects while AI handles the mid-to-low-tier leads through automated, value-driven nurturing sequences. This surgical precision in lead handling ensures that marketing budgets are optimized and sales personnel are not wasting time on tire-kickers.

Hyper-Personalization at Scale

We have all received those “personalized” emails that do nothing more than insert our first name and company into a generic template. In the 2026 sales environment, this level of personalization is no longer sufficient. AI now enables hyper-personalization at a massive scale by synthesizing data from a lead’s recent LinkedIn activity, their company’s latest quarterly earnings report, and their specific pain points expressed during initial site navigation.
Imagine a lead management system that automatically drafts a custom outreach video or a bespoke white paper tailored specifically to a prospect’s unique challenges within seconds of them visiting a landing page. This level of relevance builds immediate trust and authority, making it significantly harder for a lead to “go cold.” The goal is to make every prospect feel like they are the agency’s only priority, even if the agency is managing thousands of leads simultaneously.

Eliminating the “Leaky Bucket” in Sales Funnels

One of the primary reasons leads go cold is the friction inherent in the hand-off between marketing and sales. Often, a lead is generated by a marketing campaign, passed to a CRM, and then sits in a queue until a sales development representative (SDR) picks it up. Each minute that passes represents a leak in the funnel.

By 2026, AI will act as the bridge that seals these leaks. Autonomous “middle-ware” AI can monitor CRM activity in real time. If a lead has not been contacted within a specified timeframe, the AI can initiate a “warm-up” sequence, such as sending a relevant case study or a personalized video message from the account executive assigned to the lead. This ensures that the momentum generated by the initial inquiry is never lost.

The Role of Agentic AI in Agency Growth

For digital agencies, the pressure to deliver results is higher than ever. Clients are no longer satisfied with “leads generated”; they want to see “revenue closed.” This shift in expectations requires agencies to take a more active role in the lead handling process of their clients. This is where Agentic AI comes into play.

Agentic AI refers to AI systems that can take independent action to achieve a goal. Instead of just notifying a client that a lead has arrived, an agency’s AI system can engage the lead, qualify them through a series of discovery questions, and then book a time directly on the client’s calendar. By taking over the heavy lifting of the qualification phase, agencies provide massive value, directly impacting the client’s bottom line and increasing agency retention rates.

Data Privacy and Ethical AI Lead Handling

As we leverage more powerful AI tools, the importance of data privacy and ethical considerations cannot be overstated. By 2026, regulations like GDPR and CCPA will likely have evolved, requiring even stricter transparency regarding how AI uses personal data to influence sales decisions.

Successful lead management strategies must balance the efficiency of AI with a commitment to data security. Consumers will be more willing to engage with AI-driven systems if they know their data is being handled responsibly. Agencies must ensure that their AI models are “clean”—meaning they are trained on compliant data sets and provide clear opt-out options for prospects. Transparency about the use of AI in the sales process can actually become a selling point, demonstrating a brand’s commitment to innovation and modern efficiency.

Human-AI Collaboration: The Hybrid Model

While AI will handle the bulk of the repetitive


What it takes to make demand gen work for B2B and ecommerce

Google Ads has undergone a massive transformation over the last several years, shifting from a platform primarily defined by keyword intent to one that embraces the power of visual storytelling and machine learning. At the forefront of this evolution is Demand Gen, a campaign type designed to bridge the gap between traditional search advertising and the high-impact visual nature of social media platforms.

For B2B organizations and ecommerce brands, the transition to Demand Gen often feels counterintuitive. Traditional search strategies rely on users telling the platform exactly what they want through a search query. Demand Gen, however, functions on the principle of interruption. It places your brand in front of potential customers while they are engaged with content on YouTube, Gmail, and the Google Discover feed. To make this work, marketers must abandon the search-first mindset and adopt the strategies of a social advertiser.

At the recent SMX Next conference, Jack Hepp, owner of Industrious Marketing, provided a deep dive into the nuances of Demand Gen. He highlighted why many businesses—particularly those in the B2B and lead generation sectors—fail when they first launch these campaigns. By understanding the underlying mechanics of Demand Gen and aligning creative strategy with the customer journey, businesses can unlock a powerful engine for growth that complements their existing search efforts.

Understanding the Shift: From Intent to Interruption

The fundamental difference between Google Search and Demand Gen lies in the user’s mindset. In Search, the user has “high intent.” They are actively looking for a solution, a product, or information. In this scenario, the text ad serves as the answer to a question.

Demand Gen is different. It is an “interruption-based” format. Your target audience isn’t looking for you; they are watching a video on YouTube, checking their inbox, or browsing their personalized news feed. In this environment, visual creative becomes the new keyword. You are no longer bidding on what a person says; you are bidding on who that person is and what visuals will stop them in their tracks.

This shift requires a complete re-evaluation of how campaigns are built. If you treat Demand Gen like a standard Display campaign or a Search campaign without keywords, you will likely see poor engagement and wasted spend. Success in Demand Gen is predicated on your ability to capture attention within the first few seconds of an encounter.

Common Misalignments in Demand Gen Strategy

Many digital marketers approach Demand Gen with baggage from other campaign types. Jack Hepp identified four critical mistakes that often lead to failure:

Expecting Bottom-of-Funnel CPAs from Mid-Funnel Traffic

Because Demand Gen reaches people earlier in their journey, the Cost Per Acquisition (CPA) for a direct sale or a “Request a Demo” CTA will naturally be higher than it is on Search. Expecting the same efficiency from a cold audience as you get from someone searching for your brand name is a recipe for perceived failure.

Using “Spray and Pray” Targeting

While Google’s AI is powerful, it still needs a focused starting point. Targeting “everyone interested in technology” is too broad for the algorithm to find meaningful patterns quickly. Without specific guardrails, the campaign will spend heavily on low-quality impressions that never convert.

Running Bland, Generic Creative

In a visual feed, stock photos and corporate “blue-background” images are invisible.
If your creative looks like an ad, people will treat it like an ad and scroll past. Creative that fails to evoke emotion or address a specific pain point will result in a low click-through rate (CTR), which tells Google your content isn’t relevant.

Ineffective Optimization Without Negative Keywords

Search marketers are used to using negative keyword lists to sculpt their traffic. In Demand Gen, those levers don’t exist in the same way. Marketers who don’t know how to optimize through creative refreshes and audience exclusions often find themselves stuck with stagnating performance.

Campaign Structure: Understanding the Hierarchy

To master Demand Gen, you must understand how Google organizes these campaigns. The structure is divided into two distinct levels, each serving a specific purpose in the machine-learning process.

Campaign-Level Settings

The campaign level is where you set the “rules of engagement.” This includes your bidding strategy (such as Maximize Conversions or Target CPA), your primary conversion goals, and your device targeting. Crucially, the campaign level is where the overall budget is often managed, though it’s the ad group level that dictates where that budget actually goes.

Ad Group-Level Settings

The ad group level is where the “learning” happens. This is where you define your audiences, locations, and specific channel placements. It is vital to note that each ad group learns independently. Insights gained in Ad Group A regarding a specific audience do not automatically transfer to Ad Group B. This allows for precise segmentation. You can test different audience buckets—such as competitors’ website visitors versus your own first-party data—with creative tailored specifically to each group.

Creating Interruption-Based Creative

In the world of Demand Gen, you have approximately three to four seconds to make an impact. This is known as “stopping the scroll.” If your visual and headline don’t resonate instantly, the user is gone. Your creative should follow a simple but effective framework:

The Hook: A bold visual or headline that addresses a specific problem.
The Value: A brief explanation of how your product or service solves that problem.
The Action: A clear, low-friction call to action (CTA).

Unlike search ads, where you might focus on features, Demand Gen creative should focus on outcomes and pain points. For B2B, this might mean highlighting the cost of inaction or a shocking industry statistic. For ecommerce, it might mean showing the product in a lifestyle context that the viewer aspires to.

Aligning Visuals to the Customer Journey

A major pitfall in Demand Gen is asking for too much too soon. You must match your offer to the “temperature” of the audience. Pushing a high-friction offer, like a 30-minute sales demo, to a cold audience who has never heard of your brand is a strategy built for


Content scoring tools work, but only for the first gate in Google’s pipeline

In the world of modern SEO, many practitioners operate under a fundamental misunderstanding of how Google processes information. We often treat the search engine as if it were a sentient editor—a digital scholar that reads our articles, appreciates our stylistic nuances, and rewards our expertise through a deep, intelligent comprehension of the text. However, the Department of Justice (DOJ) antitrust trial recently pulled back the curtain on Google’s internal mechanics, revealing a reality that is far more mechanical and tiered than many realized.

According to testimony from Google Vice President of Search Pandu Nayak, the initial stage of the search process isn’t driven by cutting-edge generative AI or deep semantic “understanding” in the way we might define it. Instead, it relies on a first-stage retrieval system built on inverted indexes and postings lists—traditional information retrieval methods that have existed for decades. The core of this system is an evolution of Okapi BM25, a lexical retrieval algorithm.

This revelation changes how we must view content optimization. The “first gate” your content must pass through is not a neural network; it is a word-matching engine. While Google certainly employs advanced AI further down the pipeline, your content will never even reach those sophisticated models if it fails the mechanical test of the first gate. This is exactly where content scoring tools like Surfer SEO, Clearscope, and MarketMuse find their value—and where they find their limits.

How first-stage retrieval works and why content tools map to it

To understand why tools like Clearscope or Surfer SEO “work,” you must first understand Best Matching 25 (BM25). This is the retrieval function that anchors Google’s first-stage system. As Pandu Nayak described in court, Google maintains an inverted index that scans postings lists to score topicality across hundreds of billions of pages. In a matter of milliseconds, this system narrows the field from the entire web down to a candidate set of tens of thousands of pages.

Content optimization tools are essentially sophisticated mimics of this BM25 logic. They focus on four primary mechanics that define how Google’s first gate operates:

Term frequency with saturation

One of the most misunderstood aspects of SEO is how many times a keyword should appear. BM25 follows a curve of diminishing returns. The first time you mention a relevant term, you capture roughly 45% of the maximum possible score for that specific term. By the third mention, you have reached about 71% of the scoring potential. However, moving from three mentions to thirty mentions adds almost nothing to your score. This “saturation” is why keyword stuffing is not only annoying to readers but mathematically useless for ranking. Content tools help you find the “sweet spot” where you’ve satisfied the algorithm without over-optimizing.

Inverse document frequency (IDF)

Not all words are created equal. Rare, highly specific terms carry significantly more weight than common ones. For example, in a query about running gear, the term “pronation” is worth approximately 2.5 times more than the word “shoes.” Because fewer pages contain the word “pronation,” its presence is a much stronger signal to Google that the page is specifically about the technical aspects of running. Content tools use TF-IDF (Term Frequency-Inverse Document Frequency) analysis to highlight these high-value terms that signal topical authority.
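Here is a minimal sketch of these two mechanics, assuming the textbook BM25 components with a typical k1 of 1.2 (Google’s production parameters are not public). It reproduces the saturation percentages cited above; the corpus counts in the IDF example are purely hypothetical.

```python
import math

def tf_fraction(tf: float, k1: float = 1.2) -> float:
    """Fraction of the maximum BM25 term score captured at a given
    term frequency: tf / (tf + k1), with length normalization omitted."""
    return tf / (tf + k1)

def idf(total_docs: int, docs_with_term: int) -> float:
    """Standard BM25 inverse document frequency."""
    return math.log(
        (total_docs - docs_with_term + 0.5) / (docs_with_term + 0.5) + 1
    )

for tf in (1, 3, 30):
    print(f"{tf:>2} mention(s): {tf_fraction(tf):.0%} of the max term score")
# 1 mention(s): 45% of the max term score
# 3 mention(s): 71% of the max term score
# 30 mention(s): 96% of the max term score

# Rarity drives weight: in a hypothetical 10M-page index, a term found on
# 50k pages carries far more weight per mention than one found on 5M pages.
print(round(idf(10_000_000, 50_000), 2))     # ~5.3
print(round(idf(10_000_000, 5_000_000), 2))  # ~0.69
```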
Document length normalization

Google’s scoring algorithms account for the length of a page. If a 500-word article and a 5,000-word article both mention a keyword five times, the shorter article is often considered more “dense” and relevant to that specific term. This is why content tools provide recommended word counts; they are trying to help you maintain a competitive density relative to the pages that are already ranking.

The zero-score cliff

This is the most critical reason to use optimization tools. In the mechanical world of lexical retrieval, if a specific term does not appear in your document, your score for that term is exactly zero. You are effectively invisible for any query cluster containing that term. If you write a 3,000-word guide on “rhinoplasty” but fail to mention “recovery time,” you may be excluded from the candidate set for users searching for recovery-related information, regardless of your site’s authority. While Google has systems like Neural Matching (RankEmbed) to bridge some gaps, relying on them to “save” an incomplete article is a high-risk strategy.

What the research on content tools actually shows

The efficacy of content scoring tools has been the subject of several major studies. In 2025, Ahrefs, Originality.ai, and Surfer SEO all conducted research to determine if tool scores correlate with higher rankings. Across 10,000 queries and various keyword sets, the findings were consistent: there is a weak positive correlation, generally falling between 0.10 and 0.32. In the context of search engine variables, a 0.26 correlation is actually quite meaningful, but it requires context.

It is important to note that these studies were often conducted by the vendors themselves, and they rarely controlled for massive variables like backlinks, domain authority (DR), or historical click data (NavBoost). The methodology of these tools is fundamentally circular: they analyze the top 10 to 20 pages that are already ranking, identify the patterns in those pages, and then tell you to copy those patterns. This raises a valid question: Does the tool help you rank, or does it simply tell you what the current winners are doing?

Clearscope’s Bernard Huang famously noted that a low-to-mid correlation isn’t necessarily a “brag,” but it does prove one thing: these tools solve the retrieval problem, not the ranking problem. They get you into the “candidate set” (the top 1,000 results), but they don’t necessarily push you from position #8 to #1.

Why not skip these tools altogether?

If the correlation is weak and the logic is mechanical, why should professional writers use them? The answer lies in a psychological phenomenon called the “curse of knowledge.” MIT Sloan’s Miro Kazakoff describes this as the tendency for experts to forget what it was like to be a beginner. When expert writers create content, they often use internal


SerpApi moves to dismiss Google scraping lawsuit

The Legal Battle Over the Open Web: SerpApi Challenges Google

The landscape of the internet is currently being reshaped by a series of high-stakes legal battles concerning the right to access and collect public data. At the center of this storm is SerpApi, a popular service that provides developers and SEO professionals with structured data from search engine results pages (SERPs). In a significant development in the ongoing litigation between the tech giant and the data provider, SerpApi has officially moved to dismiss Google’s lawsuit. The motion, filed on February 20, marks a pivotal moment that could define the future of data scraping, the SEO industry, and the training of artificial intelligence models.

SerpApi’s defense rests on a fundamental argument: Google is attempting to use copyright law as a weapon to maintain a monopoly over information that is already available to the public. By invoking the Digital Millennium Copyright Act (DMCA), Google seeks to penalize the automated collection of search results. However, SerpApi and its legal team argue that this is a gross misapplication of a law intended to protect creative works, not to gatekeep the public-facing components of a search engine’s advertising business.

The Origins of the Conflict: Google’s Initial Complaint

The legal friction between Google and SerpApi escalated into a full-scale court battle in December, when Google filed a lawsuit alleging that SerpApi was operating a sophisticated operation designed to “scrape and resell” Google’s search results. Google’s complaint focused heavily on the technical measures SerpApi uses to gather data. According to Google, SerpApi systematically bypassed its “SearchGuard” protections—a suite of bot-detection and crawling controls designed to prevent automated access to search pages.

Google’s allegations were specific and technical. The search giant claimed that SerpApi utilized massive networks of rotating bot identities to mask its activity and mimic human behavior. By doing so, Google argued, SerpApi was able to ignore crawling directives (such as those found in robots.txt) and scrape licensed content from specialized search features. This content includes everything from high-resolution images to real-time data feeds, which Google claims are protected by intellectual property agreements and technical safeguards.

From Google’s perspective, this isn’t just about data; it is about the integrity of its platform. Google invests heavily in bot detection to ensure that its servers are not overwhelmed by automated traffic and to protect the ad-supported ecosystem that funds its search engine. Google framed SerpApi’s business model as a parasitic enterprise that profits from Google’s infrastructure while actively subverting the rules of the road.

SerpApi’s Response: Public Data is Not a Private Secret

In its motion to dismiss, SerpApi, whose CEO and founder is Julien Khaleghy, strikes back at the core of Google’s legal theory. SerpApi’s primary contention is that Google is misusing the DMCA. Traditionally, the DMCA’s anti-circumvention provisions are used to protect copyrighted works—think of digital rights management (DRM) on a movie or a piece of software. SerpApi argues that a search engine results page, which is essentially a directory of links and snippets pointing to other websites, does not qualify as a copyrighted work in the same category. SerpApi asserts that it does not engage in “circumvention” as defined by the statute.
They maintain that their service does not decrypt files, disable authentication protocols, or access any data that is not already visible to a standard user with a web browser. “SerpApi retrieves the same information available to any user in a browser, without requiring a login,” Khaleghy explained. In other words, if a human can see the data without needing a password, then an automated tool should be allowed to view it as well.

Furthermore, SerpApi pointed to a perceived contradiction in Google’s own filing. Google’s complaint admitted that its anti-bot systems were designed to protect its advertising revenue and business model. SerpApi argues that protecting a business model is not the same as protecting a copyrighted work. If the technical barriers are there to protect ads rather than intellectual property, then the DMCA—a copyright law—should not apply.

Legal Precedents and the “Information Monopoly”

To bolster its motion to dismiss, SerpApi is leaning on established legal precedents that favor the open accessibility of public data. One of the most significant cases cited is the Ninth Circuit’s decision in hiQ v. LinkedIn. In that case, the court ruled that scraping publicly available data from LinkedIn profiles did not violate the Computer Fraud and Abuse Act (CFAA). The court warned against the creation of “information monopolies,” where companies could use technical or legal hurdles to claim exclusive ownership over data that they have already made public to the entire world.

SerpApi also draws on the Supreme Court’s ruling in Impression Products v. Lexmark. While that case dealt with patent exhaustion, the underlying principle SerpApi is highlighting is that once a product (or in this case, content) is sold or made public, the creator loses certain rights to control its future use. SerpApi argues that public-facing content cannot be shielded by technical measures alone if the goal is to prevent the fair and open use of that data.

These legal citations suggest that SerpApi is positioning itself as a defender of the “Open Web.” If a multi-trillion-dollar company like Google can use the law to prevent others from even looking at its public pages via automation, it could set a dangerous precedent for the entire internet ecosystem.

The Broader Context: A Multi-Front War on Scraping

The lawsuit from Google does not exist in a vacuum. It is part of a broader, escalating legal campaign against data scraping companies. Just months before Google’s suit, on October 22, Reddit filed a lawsuit against SerpApi, along with other firms like Perplexity and Oxylabs. Reddit’s complaint was even more pointed, alleging that these companies were scraping Reddit content indirectly through Google Search and then reselling or reusing it to train AI models. Reddit’s legal team went so far as to describe SerpApi’s operations as being on an “industrial scale” and claimed they had set a “trap” post. This


The SEO’s guide to Google Search Console

Search Console is a free gift from Google for SEO professionals that tells you how your website is performing. It is the closest thing to X-ray vision we can get in an industry often shrouded in mystery and algorithmic shifts. Whether you are a seasoned SEO director or a business owner trying to make sense of your digital footprint, Google Search Console (GSC) is the primary source of first-party search truth.

GSC is packed with data that SEO professionals can mine for hidden nuggets: clicks and impressions from search queries, Core Web Vitals, and whatever other surprises lie within your website’s technical architecture. Custom regex filters allow you to navigate through a million-page website with surgical precision, while automated reports keep you informed of your site’s health in real time.

While all SEO professionals hope to avoid any catastrophic SEO-related events—particularly with the rise of Google’s AI Overview (AIO)—the best defense is preparation. This guide is engineered to help your site withstand “zombie pages,” Helpful Content Update bloodbaths, core update mood swings, and AI Overviews siphoning your clicks like a scene out of Mad Max: Search Edition. When the SEO industry gets dicey, this guide is exactly what you need to navigate the storm.

What does Search Console do? And how does it help SEO?

Google Search Console is a free website analytics and diagnostic tool provided by Google. Its primary purpose is to track your website’s performance in Google Search results. As Google continues to evolve, we expect GSC to eventually incorporate data from Gemini and “AI Mode,” but for now, it remains the gold standard for understanding how the world’s most popular search engine interacts with your content.

For an SEO director, Search Console is a daily companion. It is used to monitor content performance, validate technical fixes, and track the growth of branded and non-branded queries. Most importantly, it helps prioritize strategy. By seeing exactly which queries drive traffic and which pages are failing to index, you can shift your resources toward the areas that will provide the highest return on investment.

How do I set up Search Console?

Getting set up on Search Console is quick and easy, though it may require some technical support from your web development team depending on your site’s configuration. To begin, you must have a Google account. Once logged in, navigate to https://search.google.com/search-console. If you do not see any profiles listed, you will need to add a “property.” Google offers two main types: a Domain property or a URL prefix property. Choosing the right one is essential for how your data is aggregated and reported.

Domain property is the default recommendation

A Domain property is the most comprehensive way to view your site. It includes all subdomains (like blog.example.com or shop.example.com), multiple protocols (both HTTP and HTTPS), and all path strings. It provides a holistic view of your website’s performance because it automatically groups the www and non-www versions of your site together.

To set up a domain property, you enter your root domain (removing the HTTPS and any trailing slashes). Verification for a domain property is typically done via a DNS TXT record. This requires you to log in to your hosting provider (such as GoDaddy, Bluehost, or Cloudflare) and add a specific string of text provided by Google. If you have technical support, verifying through a CNAME record is another viable alternative.
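DNS changes can take time to propagate, so it helps to confirm the TXT record is actually live before clicking Verify in Search Console. Here is a minimal sketch of such a check using the third-party dnspython package; the domain and token are placeholders for your own values.

```python
import dns.resolver  # pip install dnspython

def has_google_verification(domain: str, token: str) -> bool:
    """Return True if the google-site-verification TXT record is visible."""
    expected = f"google-site-verification={token}"
    answers = dns.resolver.resolve(domain, "TXT")
    # TXT values come back quoted, e.g. '"google-site-verification=abc123"'
    return any(expected in record.to_text() for record in answers)

# Placeholders: use your own domain and the token Google gives you.
print(has_google_verification("example.com", "AbC123fakeToken"))
```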
For ecommerce sites, setting up a domain property is particularly beneficial. It allows you to connect your data to the Google Merchant Center and set specific shipping and return policies. When paired with proper schema markup (Product + Offer + shippingDetails + returnPolicy), Google can read your store like a label, displaying price, availability, and delivery speed directly in the search results.

URL prefix property allows you to dissect sections of a site

A URL prefix property is more specific: it includes the exact protocol (HTTP vs. HTTPS) and specific path strings. This is incredibly useful if you want to dive deep into a specific section of a website, such as a /blog/ subfolder or a specialized international directory like /uk/.

Many SEOs choose to set up a domain property first for the big-picture view and then create individual URL prefix properties for subdomains or major subfolders. This allows for more granular troubleshooting and specialized reporting. For example, if you work with a customer support team, you can create a property specifically for the /help-center/ folder, allowing them to see exactly how their documentation is performing without sifting through marketing data.

Key moments in history for Search Console

Search Console has undergone a massive transformation since its inception. It has evolved from a simple diagnostic tool for webmasters into a sophisticated performance engine. Looking back at its history helps us understand the direction Google is heading.

June 2005: Google Webmaster Tools was officially launched.
May 2015: Google rebranded the service to Google Search Console to be more inclusive of all search professionals, not just “webmasters.”
June 2016: Introduction of the mobile usability report as mobile search began to overtake desktop.
September 2016: Improvements were made to the Security Issues report to help sites deal with malware and hacking.
September 2018: A major update introduced the Manual Actions report, the “Test Live” feature, and extended historical data to 16 months.
November 2018: Google began experimenting with the Domain properties we use today.
June 2019: Mobile-first indexing features were added to reflect Google’s primary crawling method.
May 2020: The Core Web Vitals report replaced the old speed report, emphasizing user experience (LCP, CLS, and FID, which was later replaced by INP).
November 2021: A fresh design rollout made the interface more modern and accessible.
September 2022: A new HTTPS report was launched to ensure site security.
November 2022: The Shopping tab listings feature was added to help ecommerce brands track their visibility.
September 2023: Merchant Center integrated reports were rolled out for deeper ecommerce insights.
November 2023: A new robots.txt report was released to help debug crawling issues.
August 2024: Search Console

Uncategorized

Content scoring tools work, but only for the first gate in Google’s pipeline

The Great Misconception: How Google Actually Sees Your Content

Most SEO professionals and digital marketers give Google far too much credit. In our quest to create high-quality content, we often assume that Google's algorithm understands our writing the same way a human editor does. We imagine a deeply intelligent AI reading our pages, grasping subtle nuances, evaluating the weight of our expertise, and rewarding "quality" in a vacuum.

However, the reality revealed during the Department of Justice (DOJ) antitrust trial tells a much more mechanical, and perhaps less sophisticated, story. Under oath, Google VP of Search Pandu Nayak described a system that functions in stages. The first stage, known as retrieval, is built on inverted indexes and postings lists: traditional information retrieval methods that predate modern generative AI by several decades. Court exhibits from the remedies phase specifically referenced "Okapi BM25," the canonical lexical retrieval algorithm from which Google's systems have evolved over the years.

This means the very first gate your content must pass through isn't a complex neural network; it is a word-matching engine. While Google does deploy advanced AI further down the pipeline, including BERT-based models, dense vector embeddings, and entity understanding systems, these "expensive" computations only operate on the much smaller candidate set that the traditional retrieval stage produces. If your content doesn't pass that first lexical gate, the advanced AI never even sees it. This is precisely where content scoring tools like Surfer SEO, Clearscope, and MarketMuse come into play, and why their methodology remains relevant despite the rise of AI-driven search.

How First-Stage Retrieval Works and Why Content Tools Map to It

To understand why content scoring tools work, you must understand Best Matching 25 (BM25), the retrieval function most commonly associated with Google's initial screening process. As Pandu Nayak's testimony highlighted, the mechanics involve an inverted index that scans postings lists to score topicality across hundreds of billions of indexed pages. This system narrows the field from billions to tens of thousands of candidates in a matter of milliseconds. For content creators, the mechanics of BM25 offer four critical takeaways that define how we should optimize our writing (a worked example follows the fourth).

Term Frequency with Saturation

In the world of BM25, more isn't always better. The first mention of a relevant term captures roughly 45% of the maximum possible score for that specific term. By the time you've mentioned it three times, you've reached about 71% of the scoring potential. However, the curve flattens aggressively after that: going from three mentions to thirty adds almost nothing to your score. This "saturation" prevents keyword stuffing from being effective while rewarding the inclusion of a term at least once or twice.

Inverse Document Frequency (IDF)

Not all words are created equal. Rare, specific terms carry significantly more scoring weight than common ones. For example, in a query about running shoes, the word "pronation" is worth roughly 2.5 times more than the word "shoes." This is because "shoes" appears on millions of pages, while "pronation" is specific to high-intent, expert-level running content. If you miss these rare but vital terms, your topicality score suffers disproportionately.

Document Length Normalization

BM25 and similar algorithms penalize longer documents for the same raw term count. Essentially, these scoring models look at term density relative to the total word count. This explains why almost every content tool on the market provides a recommended word count range; they are trying to help you maintain a density that the algorithm deems "natural" for a given topic.

The Zero-Score Cliff

This is perhaps the most important concept for SEOs to grasp. If a specific, relevant term does not appear in your document at all, your score for that term is exactly zero. You aren't just ranked lower; for queries containing that term, you are effectively invisible. If you write a 5,000-word guide on "rhinoplasty" but never once mention "recovery time," you are likely to score zero for the entire cluster of queries related to recovery, regardless of the quality of your prose.
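To make these four mechanics concrete, here is a minimal, self-contained Python sketch of BM25 scoring under standard textbook parameters (k1 = 1.2, b = 0.75). It illustrates the formula's behavior, not Google's production system, and the corpus statistics are invented. Note how one mention earns roughly 45% of the per-term maximum, three mentions earn roughly 71%, and a missing term scores exactly zero.

```python
# Minimal sketch of Okapi BM25 scoring with textbook parameters.
# Demonstrates TF saturation, IDF weighting, length normalization,
# and the zero-score cliff. All corpus statistics are invented.
import math

K1, B = 1.2, 0.75  # standard BM25 parameters

def bm25_term_score(tf, df, num_docs, doc_len, avg_doc_len):
    """Score one query term against one document."""
    if tf == 0:
        return 0.0  # the zero-score cliff: an absent term contributes nothing
    idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1)  # rare terms weigh more
    norm = K1 * (1 - B + B * doc_len / avg_doc_len)  # length normalization
    return idf * tf * (K1 + 1) / (tf + norm)

# TF saturation: for a document of average length, the share of the
# per-term maximum is tf / (tf + k1).
for tf in (1, 3, 30):
    share = tf / (tf + K1)
    print(f"{tf:>2} mention(s): {share:.0%} of max term score")
# -> 1: ~45%, 3: ~71%, 30: ~96% (the curve flattens fast)

# IDF: a rare term vs. a common term in an invented 10M-document corpus
N = 10_000_000
for term, df in (("shoes", 2_000_000), ("pronation", 40_000)):
    score = bm25_term_score(tf=2, df=df, num_docs=N, doc_len=1200, avg_doc_len=1000)
    print(term, round(score, 2))

# A missing term scores exactly zero, no matter how good the prose is
print(bm25_term_score(tf=0, df=40_000, num_docs=N, doc_len=1200, avg_doc_len=1000))
```

Running the sketch shows the saturation curve and the rare term ("pronation") outscoring the common one ("shoes") at identical frequency, which is the IDF effect the tools are trying to approximate with their term suggestions.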
The Multi-Stage Pipeline: From Retrieval to Ranking

It is helpful to visualize Google's processing of a query as a funnel. Content optimization tools help you enter the top of the funnel, but they cannot guarantee you'll come out the bottom as the number one result. After first-stage retrieval (BM25) narrows the field, the pipeline gets progressively more expensive and sophisticated.

The next stage often involves systems like RankEmbed (Neural Matching), which helps supplement lexical retrieval by surfacing pages that might have missed a specific keyword but are semantically related. Following this, a system known as "Mustang" applies over 100 different signals, including topicality, quality scores, and NavBoost. NavBoost is particularly powerful: it represents 13 months of accumulated click data, which Nayak described as "one of the strongest" ranking signals in Google's arsenal. At the very end of the pipeline is DeepRank, which applies BERT-based language understanding. Because BERT models are computationally expensive, Google only runs them on the final 20 to 30 results.

The practical implication for SEOs is clear: no amount of authority, brand power, or NavBoost "clicks" can help you if your page fails to pass the first gate. Content scoring tools are your ticket to the candidate set; what happens after that is a separate battle involving authority and user experience.

What the Research on Content Tools Actually Shows

There has been a great deal of debate regarding whether high scores in tools like Surfer or Clearscope actually lead to higher rankings. Several major studies have attempted to find a correlation. In 2025, Ahrefs conducted a study across 20 keywords, Originality.ai looked at approximately 100 keywords, and Surfer SEO analyzed 10,000 queries. All three studies reached a similar conclusion: there is a weak positive correlation between content scores and rankings, generally falling in the 0.10 to 0.32 range. While a 0.26 correlation might seem low, in the complex world of search, it is actually quite meaningful.

However, these findings come with several caveats. First, most of these studies were conducted by the
