AI-SEO Is A Change Management Problem via @sejournal, @Kevin_Indig

The Shift from Tactical Execution to Strategic Evolution

The integration of artificial intelligence into search engine optimization is often framed as a technical upgrade. We talk about prompt engineering, automated content clusters, and AI-driven keyword research. However, viewing AI-SEO through a purely technical lens is a mistake that can lead to catastrophic failure within an organization. As the search landscape evolves with the introduction of generative AI and AI Overviews, the primary hurdle isn’t the technology itself—it is the human and organizational structure surrounding it. AI-SEO is, at its core, a change management problem.

For years, SEO has operated under a relatively stable set of rules: create high-quality content, build authority, and optimize for specific ranking signals. AI disrupts this stability by changing how content is produced, how search engines understand intent, and how users interact with results. To succeed, businesses cannot simply “bolt on” AI tools to existing workflows. They must rethink their entire approach to digital growth, starting from the boardroom and extending to every level of the marketing department.

Why AI-SEO Fails in the Boardroom

The most common reason AI-SEO initiatives stall is a lack of alignment at the leadership level. When a marketing team proposes a massive shift toward AI-assisted content or automated technical SEO, the C-suite often reacts with hesitation. This hesitation is usually rooted in three main concerns: brand risk, legal uncertainty, and a lack of clear ROI benchmarks.

Leadership often views AI as a potential liability. They hear stories of “hallucinations” where AI provides factually incorrect information, or they fear that search engines like Google will penalize AI-generated content. Without a clear strategy that addresses these risks, the boardroom will likely withhold the budget and resources necessary to scale.
To bridge this gap, SEOs must move away from talking about “prompts” and start talking about “operational efficiency,” “market share protection,” and “competitive moats.” Change management requires translating the technical possibilities of AI into business outcomes. If you want executive buy-in, you must demonstrate how AI-SEO reduces the cost of customer acquisition or how it allows the company to enter new market segments that were previously too expensive to target manually. Without this high-level alignment, AI-SEO remains a “shadow project” that never gains the momentum needed to transform the business.

Redefining Metrics for the AI Era

One of the biggest challenges in managing the transition to AI-SEO is that our traditional metrics are becoming obsolete. For decades, the industry has relied on organic traffic, click-through rates (CTR), and keyword rankings. However, as Google integrates AI Overviews (formerly SGE), the way users consume information is changing. A user might get their answer directly on the search results page without ever clicking through to a website. This “zero-click” reality means that traditional traffic metrics may decline even as brand influence increases.

To manage this change, organizations need to develop new key performance indicators (KPIs) that reflect the AI-driven search environment. These might include:

Share of Model: How often is your brand cited as a source in AI-generated answers?

Brand Sentiment in LLMs: How do large language models (LLMs) like GPT-4 or Gemini describe your products and services?

Conversion Efficiency: Instead of chasing raw traffic, measure the quality of the traffic that does reach the site and whether AI-informed content attracts higher-intent users.

Cost per Published Asset: How much does AI improve the efficiency of the content pipeline?

By shifting the metrics, you change the conversation.
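A “Share of Model” KPI like the one above can be prototyped as a simple citation-rate calculation over a sample of AI answers. The sketch below is a minimal illustration: the sampled answers and the brand name “Acme Boots” are invented, and a real pipeline would pull answer text from whatever logging or API access you have to the models you track.

```python
# Minimal "Share of Model" sketch: the fraction of sampled AI answers
# that cite a given brand. Answers and brand names are invented.

def share_of_model(answers: list[str], brand: str) -> float:
    """Return the fraction of answers that mention the brand."""
    if not answers:
        return 0.0
    cited = sum(1 for answer in answers if brand.lower() in answer.lower())
    return cited / len(answers)

# A hand-collected sample of AI answers to queries in your niche.
sampled_answers = [
    "Top picks include Acme Boots and TrailCo.",
    "Experts often recommend TrailCo for wide feet.",
    "Acme Boots offers a waterproof wide-fit line.",
]

print(share_of_model(sampled_answers, "Acme Boots"))  # 2 of 3 answers -> ~0.67
```

In practice you would also normalize brand variants (“Acme,” “acmeboots.com”) and track the number over time rather than as a one-off estimate.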
Instead of explaining why traffic is down, you are demonstrating how the brand is capturing the “mindshare” of the AI models that now mediate the relationship between consumers and the information they seek. This is a critical component of change management: giving stakeholders a new way to visualize and measure success.

The Ownership Dilemma: Who Runs AI-SEO?

In a traditional setup, the SEO team handles keyword strategy, the editorial team handles writing, and the dev team handles technical implementation. AI blurs these lines. When an AI tool can generate code, write copy, and perform data analysis, who “owns” the output? This ambiguity often leads to internal friction, which is a hallmark of poor change management.

Solving the ownership problem requires a cross-functional approach. Many successful organizations are moving toward a “Center of Excellence” model for AI: a dedicated group defines the standards, tools, and ethical guidelines for AI use across the company, while individual departments execute within those guardrails. SEOs must evolve from being “executors” to being “orchestrators.” They are the ones who understand the intent of the user and the requirements of the search engine; they must now direct the AI to fulfill those needs while ensuring the editorial team maintains the brand’s unique voice.

Furthermore, there is the “human-in-the-loop” (HITL) requirement. Change management involves reassuring staff that AI is an augmentation, not a replacement. Defining clear roles for human oversight—such as fact-checking, brand alignment, and emotional resonance—ensures that output quality remains high and the team stays engaged rather than threatened.

Scaling Tactics Responsibly

Once leadership is aligned and metrics are defined, the temptation is to “flood the zone” with AI content. This is where many companies fail. Scaling AI-SEO is not about quantity; it is about the strategic application of efficiency.
If you use AI to produce 1,000 mediocre articles, you aren’t building an asset; you are building technical and editorial debt that will eventually be wiped out by a core algorithm update. A structured change management plan for scaling AI tactics should follow a phased approach.

Phase 1: The Pilot Program

Choose a specific niche or a subset of the website to test AI workflows. Use this phase to identify friction points. Does the AI struggle with the brand’s tone? Are the technical implementations slowing down the site? A pilot allows you to fail small and learn fast before committing the entire department’s resources.

Phase 2: Workflow Integration

Once the pilot proves successful, integrate the AI tools into the team’s existing project management (PM) systems.


The latest jobs in search marketing

The search marketing landscape is currently undergoing one of the most significant transformations in its history. As we move further into 2026, the traditional boundaries between search engine optimization (SEO), pay-per-click (PPC), and generative AI search are blurring. For professionals looking to advance their careers, the current job market reflects this evolution, with high demand for talent that can navigate both technical search algorithms and the emerging world of Answer Engine Optimization (AEO).

Whether you are a seasoned director looking for an executive leadership role or a specialist aiming to sharpen your skills in a niche industry, the current openings represent a wide variety of opportunities. This week’s list includes high-level positions at established brands, innovative agencies, and fast-growing tech startups.

Leading the Charge: Newest SEO Opportunities

The role of an SEO professional has expanded far beyond simple keyword placement. Today’s specialists are expected to be masters of content strategy, technical site architecture, and data analytics. The following positions highlight the diversity of the current market, ranging from local retail growth to enterprise-level agency management.

SEO Director at Upgrow (San Francisco, CA)

Upgrow is looking for a creative and highly organized SEO Director to lead its digital marketing agency efforts in San Francisco. This is a high-impact leadership role that requires a balance of technical expertise and interpersonal management. The successful candidate will oversee diverse projects involving research, planning, analytics, and link building. Notably, this role involves managing two direct reports and maintaining direct client relationships, making it ideal for a strategist who enjoys both the technical and account management sides of the business.
Digital Marketing Manager at MedEquip Shop (Houston, TX / Remote)

For those interested in the intersection of healthcare and retail, MedEquip Shop is hiring a Digital Marketing Manager. This role focuses on scaling online and in-store sales for medical equipment. The position is heavily centered on SEO/SEM and content marketing, with a specific goal of increasing the brand’s footprint in the Houston area. It’s an excellent opportunity for a marketer who wants to see the tangible impact of their work on local community health services.

SEO & Growth Marketing Specialist at Yami

Yami, a leader in bringing Asian flavors and trending products to a global audience, is looking for an SEO and Growth Marketing Specialist. Having been named to the “Inc. 500 List” of fastest-growing startups, Yami offers an environment that is fast-paced and data-driven. The role involves connecting consumers with authentic food, beauty, and wellness products through sophisticated search strategies. This position is perfect for those who enjoy working at the intersection of e-commerce and cultural trends.

SEO Executive at Urban Cruise

The travel and transportation sector remains a competitive field for organic search. Urban Cruise is seeking an SEO Executive to drive traffic for niche services like bus rentals, event transportation, and city tours. This role is highly specialized, requiring thorough keyword research and implementation tailored specifically to group travel and local SEO tactics.

SEO Specialist at Dollar Loan Center (Las Vegas, NV)

Dollar Loan Center is looking for an on-site SEO Specialist at its Las Vegas headquarters. This role is focused on increasing organic traffic and improving search rankings for a large-scale financial services provider. Candidates will be responsible for supporting organic search efforts through site optimization and content strategy in a highly regulated industry.
Advancing Performance: PPC and Paid Media Roles

Paid media has become increasingly complex as automation and AI-driven bidding strategies take center stage. Companies are looking for performance marketers who can go beyond “setting and forgetting” campaigns, instead focusing on deep data analysis and multi-channel integration.

B2B Performance Marketing Manager at My Amazon Guy (Remote)

This remote position is designed for a results-driven strategist who can lead paid acquisition and demand generation efforts. The role requires high fluency in Google Ads and Meta, focusing on driving qualified leads and pipeline growth. For a performance marketer who thrives on data-backed strategies, this role offers the chance to scale revenue in the competitive Amazon agency space.

Sr. Manager, Digital Paid Media at Sono Bello

Sono Bello, a top cosmetic surgery specialist with over 185 board-certified surgeons, is seeking a Senior Manager of Digital Paid Media. This is a high-energy role within a dynamic environment. The candidate will be responsible for managing large-scale budgets and driving conversions for more than 100 locations nationwide. It is a prime position for someone with a background in high-volume lead generation.

Senior Manager / Assistant Director of Paid Media at Discovery Senior Living

Discovery Senior Living is looking for a strategic leader to develop and optimize paid media programs across a massive portfolio of senior living communities. This role involves managing external agency relationships and ensuring performance excellence to accelerate occupancy growth. It requires a data-heavy approach to lead generation and a deep understanding of the customer journey in the senior care industry.

Associate Manager, Performance Marketing at Versant Media

Versant Media, a publicly traded company known for its iconic brands in news, sports, and entertainment, is hiring an Associate Manager. This role involves working with brands that shape cultural conversations.
The focus is on using content and technology to connect audiences with the media they love, requiring a sophisticated understanding of audience segmentation and performance technology.

The Evolution of Search: SEO, GEO, and AI Integration

Perhaps the most exciting trend in the search marketing job market is the emergence of roles dedicated specifically to AI search. As consumers increasingly use large language models (LLMs) and AI-driven search engines, brands are hiring specialists to master Generative Engine Optimization (GEO).

SEO and AI Search Optimization Manager at Big Think Capital (New York)

Big Think Capital is leading the way by hiring a dedicated SEO and AI Search Manager. This role is responsible for executing the company’s GEO strategy, ensuring the brand remains visible across AI search surfaces. With a salary of $100,000, this position involves optimizing website architecture for both traditional crawlers and modern AI models.

Search Engine Optimization Manager at


Microsoft Ads launches self-serve negative keyword lists

Enhancing Control in the Microsoft Advertising Ecosystem

For digital marketers and search engine specialists, the ability to control exactly where their ads appear—and, more importantly, where they do not—is a fundamental pillar of a successful campaign. Recently, Microsoft Advertising took a significant leap forward in empowering users by launching self-serve negative keyword lists. This move, confirmed by Microsoft Ads Liaison Navah Hopkins, marks a major shift in how advertisers interact with the platform’s interface, particularly regarding campaign exclusions and budget management.

Historically, managing exclusions within certain campaign types on Microsoft Ads often required a multi-step process that sometimes involved reaching out to support teams. By moving to a self-serve model, Microsoft is not only streamlining the workflow for agencies and in-house teams but also bringing its platform closer to the level of autonomy found in competing services like Google Ads. This update is especially impactful for those using modern, automated campaign types, where control has traditionally been more restricted.

What Are Self-Serve Negative Keyword Lists?

Negative keyword lists are a tool used in pay-per-click (PPC) advertising to prevent ads from being triggered by specific search queries. If a business sells premium high-end hardware, for instance, it might want to exclude terms like “free,” “cheap,” or “discount” to avoid paying for clicks from users who are unlikely to convert. These lists act as a filter, protecting the advertiser’s budget from irrelevant traffic and improving the overall click-through rate (CTR).

The “self-serve” aspect of this update means that these lists can now be created, edited, and applied directly through the Microsoft Advertising user interface (UI) without manual intervention from Microsoft support.
This gives advertisers the agility to respond instantly to real-time search trends and campaign performance data.

Key Features and Technical Specifications

The rollout of self-serve negative keyword lists brings a specific set of functionalities designed for scale and efficiency. Understanding the technical boundaries and capabilities of these lists is essential for any advertiser looking to optimize performance.

The 5,000-Keyword Limit

Each negative keyword list in the Microsoft Ads UI can now support up to 5,000 negative keywords, entered one per line, allowing for extensive filtering of unwanted traffic. For most medium-to-large accounts, a 5,000-keyword limit per list is enough to cover broad categories of irrelevant queries, such as job seekers, researchers looking for free information, or competitors.

Match Type Consistency

One of the most important aspects of this update is how it handles match types. Microsoft has clarified that negative keywords in these lists function identically to how they do in traditional Search campaigns. However, advertisers must be precise with their formatting:

Exact match requires brackets. For example, [free software] will exclude only that exact phrase, with no additional words.

Phrase match requires quotation marks. For example, “free software” will exclude any query that contains that exact sequence of words, even if other words precede or follow it.

Important note: Microsoft specifically warns that hyphens should not be used for match type formatting in this context, as they may not be recognized correctly by the system.

Account- and Campaign-Level Application

Flexibility is at the heart of this update. Advertisers can choose to apply these negative keyword lists at either the campaign level or the account level.
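The formatting rules above are easy to get wrong across thousands of entries, so it is worth linting a list before upload. The helper below is a hypothetical sketch, not part of any Microsoft tool or API: it classifies each entry by its match-type formatting and flags hyphen-prefixed entries and oversized lists.

```python
import re

MAX_KEYWORDS = 5000  # per-list limit in the Microsoft Ads UI

def classify(entry: str) -> str:
    """Classify one negative keyword entry by its match-type formatting."""
    entry = entry.strip()
    if re.fullmatch(r"\[[^\[\]]+\]", entry):
        return "exact"      # [free software]
    if re.fullmatch(r'"[^"]+"', entry):
        return "phrase"     # "free software"
    if entry.startswith("-"):
        return "invalid"    # hyphens are not match-type markers here
    return "broad"

def lint_list(entries: list[str]) -> list[str]:
    """Return human-readable problems found in a negative keyword list."""
    problems = []
    if len(entries) > MAX_KEYWORDS:
        problems.append(f"{len(entries)} entries exceeds the {MAX_KEYWORDS} limit")
    for number, entry in enumerate(entries, start=1):
        if classify(entry) == "invalid":
            problems.append(f"line {number}: hyphen used as a match-type marker")
    return problems

entries = ['[free software]', '"free software"', 'cheap', '-free']
print([classify(e) for e in entries])  # ['exact', 'phrase', 'broad', 'invalid']
print(lint_list(entries))             # flags line 4
```

Running a check like this before pasting 5,000 lines into the UI is far cheaper than discovering, days later, that half the entries were silently ignored.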
Applying a list at the account level is particularly useful for “universal negatives”—terms that should never trigger an ad in any campaign, such as adult content or non-industry terms. Campaign-level lists allow for more surgical precision, tailoring exclusions to the specific goals of a single product line or service category.

Why the Move to Self-Serve Matters

The transition to a self-serve model is more than a convenience; it is a strategic improvement for the Microsoft Advertising platform that addresses several pain points that have frustrated advertisers for years.

Eliminating Friction and Support Dependency

Previously, implementing complex negative keyword strategies—especially within newer formats like Performance Max—often required a “wait-and-see” approach while support tickets were processed. In the fast-moving world of digital advertising, a delay of 24 to 48 hours can result in hundreds or thousands of dollars in wasted spend on irrelevant queries. By making these lists self-serve, Microsoft has removed the middleman, allowing for instantaneous campaign adjustments.

Improving Campaign ROI

The primary goal of any PPC campaign is to maximize return on investment (ROI). Every dollar spent on an irrelevant click is a dollar that could have been spent on a high-intent user. By providing more robust tools to filter out “junk” traffic, Microsoft is helping advertisers improve their conversion rates and lower their cost per acquisition (CPA). Better filtering leads to better data, which in turn leads to more effective machine learning and automation within the account.

Empowering Performance Max Campaigns

Performance Max (PMax) has been a significant focus for Microsoft Ads recently. While PMax offers incredible reach and automated optimization, it has often been criticized as a “black box” with limited manual control. The introduction of self-serve negative keyword lists gives advertisers a vital “steering wheel” for PMax.
It allows them to provide the necessary boundaries for the AI to work within, ensuring that automated bidding and targeting don’t stray into irrelevant territory.

Strategic Implementation of Negative Keyword Lists

Simply having the tool is not enough; knowing how to use it strategically is what separates top-tier marketers from the rest. With the new self-serve capabilities, advertisers should consider a multi-tiered approach to negative keyword management.

Developing a “Master” Exclusion List

Every account should have a master list of negative keywords that applies to almost every campaign. This list typically includes:

Employment-related terms: “jobs,” “careers,” “salary,” “hiring.”

Educational/research terms: “definition of,” “what is,” “history of,” “class,” “tutorial.”

Low-intent terms: “free,” “torrent,” “cheap,” “craigslist,” “ebay.”

Competitor names: Depending on the strategy, you may want to exclude competitors to avoid expensive bidding wars for low-converting traffic.

Categorical Lists for Product Diversity

If an advertiser manages a large e-commerce store with multiple departments,


Google publishes new Google Ads passkey help doc

The Evolution of Security in Digital Advertising

In an era where digital assets are as valuable as physical ones, the security of online advertising accounts has become a paramount concern for businesses worldwide. Google Ads, a cornerstone of the digital marketing industry, has long been a target for malicious actors looking to hijack budgets, steal sensitive consumer data, and disrupt competitive landscapes. Recognizing the escalating sophistication of phishing attacks and account takeovers, Google has taken a significant step forward by publishing a comprehensive new help document dedicated to Google Ads passkeys.

This move signals a broader shift within the tech giant’s ecosystem away from traditional, vulnerable password-based systems and toward a more secure, “passwordless” future. For advertisers, this isn’t just a minor technical update; it is a fundamental change in how account integrity is maintained in an increasingly hostile digital environment.

Understanding Passkeys: The End of the Password Era?

To appreciate the importance of Google’s new documentation, one must first understand what a passkey actually is. Unlike a traditional password—a string of characters that can be guessed, stolen, or phished—a passkey is a digital credential tied to a specific device. It relies on FIDO (Fast Identity Online) Alliance standards and uses public-key cryptography to authenticate users.

When you create a passkey, your device generates a unique pair of keys: a public key that is shared with Google and a private key that stays securely on your device. During a login attempt, Google’s servers challenge your device to prove it holds the private key. You verify your identity using your device’s existing biometric sensors (such as a fingerprint or facial recognition) or a local PIN. Because the private key never leaves your device and is never sent over the internet, it is virtually impossible for a hacker to steal it remotely.
The Core of the New Google Ads Help Documentation

The newly released Google Ads documentation is designed to act as a roadmap for advertisers transitioning to this higher level of security. The document clarifies how passkeys function within the specific context of an advertising account, which often involves multiple users, varying levels of access, and significant financial stakes. Key highlights from the new documentation include:

1. Phishing Resistance

The documentation emphasizes that passkeys are inherently phishing-resistant. Traditional two-factor authentication (2FA), such as SMS codes or even mobile app prompts, can still be intercepted or spoofed by sophisticated man-in-the-middle attacks. Passkeys eliminate this vulnerability because the authentication is bound to the specific website or app (ads.google.com), preventing users from accidentally “verifying” a login on a fraudulent clone site.

2. Mandatory Use for Sensitive Actions

Perhaps the most critical piece of information in the new help doc is the clarification on when passkeys are required. Google now mandates passkey or other high-level authentication for “sensitive actions,” including:

Changes to user access levels (adding or removing administrators).

Updates to account linking (such as connecting a YouTube channel or a CRM).

Modifications to sensitive billing information or payment methods.

By requiring a passkey for these actions, Google ensures that even if a basic password is compromised, the most damaging changes to an account cannot be made without the physical device of an authorized user.

3. Device and Browser Requirements

Google outlines the hardware and software prerequisites for using passkeys. Advertisers need to ensure their operating systems and browsers are up to date. This generally means Windows 10 or later, macOS Ventura or later, iOS 16 or later, or Android 9 or later, with support in browsers such as Chrome, Edge, and Safari.
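The challenge-response flow behind passkeys can be illustrated in a few lines of code. This is a toy sketch only: it uses textbook RSA with tiny, well-known demo primes so it runs anywhere, whereas real passkeys use FIDO2/WebAuthn with keys (typically ECDSA or Ed25519) generated and stored in secure hardware. What it demonstrates is origin binding: a signature produced for ads.google.com does not verify for a look-alike phishing domain.

```python
import hashlib
import secrets

# Toy textbook-RSA keypair (p=61, q=53) -- purely illustrative, NOT secure.
N, E, D = 3233, 17, 2753  # modulus, public exponent, private exponent

def digest(challenge: bytes, origin: str) -> int:
    # The signed payload binds the server's random challenge to the origin;
    # this binding is what makes passkeys phishing-resistant.
    h = hashlib.sha256(challenge + origin.encode()).digest()
    return int.from_bytes(h, "big") % N

def sign(challenge: bytes, origin: str) -> int:
    # Runs on the user's device; the private exponent D never leaves it.
    return pow(digest(challenge, origin), D, N)

def verify(challenge: bytes, origin: str, signature: int) -> bool:
    # Runs on the server, using only the public key (N, E).
    return pow(signature, E, N) == digest(challenge, origin)

challenge = secrets.token_bytes(16)      # server-issued one-time nonce
sig = sign(challenge, "ads.google.com")  # device signs for the real origin

print(verify(challenge, "ads.google.com", sig))  # True
# A clone site presents a different origin, so the digest -- and
# therefore the verification -- fails in practice:
print(verify(challenge, "ads-google.example", sig))
```

Because the signature covers both a fresh nonce and the origin, a phishing site cannot replay it, which is the property SMS codes and authenticator-app codes lack.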
Why Advertisers Should Prioritize Passkey Implementation

The release of this documentation is timely. Over the past several years, the advertising industry has seen a sharp increase in account compromises. For a business, a hacked Google Ads account is a nightmare scenario: attackers can quickly ramp up spending on fraudulent campaigns, deplete monthly budgets in hours, and gain access to proprietary keyword data and customer lists.

Furthermore, the reputational damage can be long-lasting. If a compromised account is used to serve malicious ads, the business’s domain may be blacklisted by Google or other security filters, making it difficult to run legitimate campaigns in the future. By following the guidance in the new help doc, advertisers can proactively insulate themselves from these risks.

Step-by-Step: Setting Up Passkeys for Google Ads

While the new help document provides the official technical framework, the practical application for most advertisers is straightforward. To secure your Google Ads account with a passkey, the process generally follows these steps:

Step 1: Access Your Google Account Security Settings

Since Google Ads access is managed through your primary Google Account, setup begins at the account level. Navigate to the “Security” tab of your Google Account profile. Under the “How you sign in to Google” section, you will find an option for “Passkeys and security keys.”

Step 2: Create a Passkey

Click “Create a passkey.” Your browser will prompt you to use your device’s biometric authentication (Touch ID, Face ID, or Windows Hello) or your device’s screen lock PIN. Once confirmed, your passkey is created and linked to that specific device.

Step 3: Test the Login

The next time you log in to Google Ads, the system will offer the option to “Use your passkey.” Simply use your biometric sensor, and you are logged in instantly. No password entry is required.

Passkeys vs. Traditional MFA: A Security Comparison

Many advertisers believe that SMS-based multi-factor authentication (MFA) is enough. However, the new Google documentation suggests otherwise. Here is how passkeys stack up against older methods:

SMS verification: Vulnerable to SIM swapping and social engineering. If a hacker convinces a carrier to move your number to their SIM card, they receive your codes. Passkeys are not tied to a phone number and cannot be intercepted this way.

Authenticator apps: More secure than SMS, but they still require the user to manually enter a code, and users can be tricked into entering those codes on a phishing site. Passkeys only work on the


Google patent hints it could replace your landing pages with AI versions

The Evolution of Search: From Directing Traffic to Creating Destinations

For decades, the fundamental agreement between Google and website owners has been straightforward: creators provide high-quality content, and in exchange, Google provides a gateway for users to discover that content. This symbiotic relationship built the modern web. However, a recently granted patent suggests that Google may be looking to move beyond the role of a simple intermediary. The patent, titled “AI-generated content page tailored to a specific user” (US12536233B1), outlines a future where Google doesn’t just send a user to your website; it creates a customized version of your website for them.

This technology represents a significant shift in search engine results page (SERP) philosophy, potentially replacing traditional brand-owned landing pages with AI-synthesized versions hosted or generated by Google itself. As the industry grapples with the rise of AI Overviews (formerly SGE), this patent introduces a new layer of complexity. It suggests that Google is considering ways to “fix” what it perceives as subpar user experiences on third-party sites by dynamically rebuilding those pages in real time.

Inside Patent US12536233B1: How the AI-Generated Page Works

Filed roughly a year ago and granted in June 2024, the patent describes a sophisticated system of machine-learned models designed to analyze both a user’s specific query and the content of a target organization’s website. The goal is to generate a bridge—a custom landing page—that perfectly aligns the user’s intent with the organization’s offerings. The technical mechanism relies on several key components:

1. The User Query and Context

The process begins when a user enters a search query. Google’s system doesn’t just look at keywords; it analyzes the user account’s history and the specific context of the search to understand the underlying intent.

2. The Landing Page Score

One of the most intriguing aspects of the patent is the “landing page score.” Before deciding to generate an AI version, the system evaluates the existing landing page of the organization that would naturally rank for the query. If the original page is deemed insufficient—perhaps because it’s too generic or requires too much manual navigation—it receives a lower score.

3. The Threshold Trigger

If the landing page score falls below a certain threshold—or, conversely, if the AI’s predicted “improvement” score is high enough—Google generates an updated search results page. This updated page features a navigation link that leads the user not to the website’s original URL, but to the newly minted, AI-generated page.

4. Real-Time Dynamic Construction

The AI-generated page is not static. It is built on the fly using data scraped from the organization’s site, structured data, and other available information. The page is designed as a “tailored” experience, removing the friction of users having to search through a website themselves.

Practical Example: The “Wide Feet” Hiking Boot Scenario

To understand the potential impact, consider a common consumer journey. A user searches for “waterproof hiking boots for wide feet.” In the current search environment, Google might show a link to a major retailer like REI or Amazon. When the user clicks that link, they are often taken to a general “Hiking Boots” category page. From there, the user must find the “Wide” filter, select “Waterproof,” and perhaps sort by price or rating. This is a multi-step process with multiple opportunities for the user to get frustrated and bounce back to Google.

Under the system described in the patent, Google’s AI would recognize this friction. Instead of sending the user to the generic category page, it would generate a custom landing page.
This page would look like a simplified version of the retailer’s site but would be pre-filtered to show only the waterproof boots available in wide sizes. It might even pull in specific reviews that mention “wide fit” and “waterproofing” to create a perfectly curated shopping experience, all before the user has truly “entered” the retailer’s traditional site architecture.

The “Terrifying” Prospect: Industry Reactions to the Patent

The discovery of this patent has sent ripples through the SEO and digital marketing communities. Experts who have spent years optimizing landing pages for conversion and brand consistency see this as a potential threat to the direct relationship between brands and consumers.

Search industry veterans like Glenn Gabe have noted that if users were frustrated by AI Overviews stealing “top of funnel” informational traffic, they will be even more concerned about AI-generated landing pages. Gabe remarked that Google could essentially create new landing pages if yours “isn’t good enough,” effectively acting as a gatekeeper for your own products.

Lily Ray, another prominent voice in the SEO space, described the prospect as “terrifying.” The concern lies in the loss of control. A landing page is more than a list of products; it is a brand’s digital storefront, with specific messaging, psychological triggers, and conversion elements the brand has carefully tested. If Google’s AI strips those away in favor of a “cleaner” or “more efficient” layout, the brand’s identity could be diluted, and its ability to track user behavior through traditional analytics could be severely hampered.

Joshua Squires, who highlighted the patent on LinkedIn after it was spotted by Brandon Lazovic, pointed out the “red flags” in how Google could dynamically rebuild the entire structure of a page. This isn’t just a summary; it’s a reinterpretation of a business’s digital presence.
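The patent’s threshold logic can be sketched as a simple scoring function. Everything below is hypothetical: the patent does not disclose a concrete formula, so the signals (query relevance, navigation friction), the weights, and the 0.5 threshold are invented purely to illustrate the shape of the decision.

```python
from dataclasses import dataclass

@dataclass
class LandingPage:
    query_relevance: float  # 0-1: how directly the page answers the query
    friction: float         # 0-1: how much filtering/navigation the user must do

def landing_page_score(page: LandingPage) -> float:
    # Invented formula: higher relevance and lower friction -> higher score.
    return page.query_relevance * (1.0 - page.friction)

def serve_ai_page(page: LandingPage, threshold: float = 0.5) -> bool:
    # Per the patent's logic, a page scoring below the threshold would
    # trigger generation of a tailored, AI-built alternative.
    return landing_page_score(page) < threshold

generic_category_page = LandingPage(query_relevance=0.4, friction=0.7)
prefiltered_page = LandingPage(query_relevance=0.9, friction=0.1)

print(serve_ai_page(generic_category_page))  # True: AI version generated
print(serve_ai_page(prefiltered_page))       # False: original page served
```

Even on this toy model, the practical takeaway is clear: pages that answer a specific facet of a query with minimal navigation are the ones least likely to be rewritten.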
A New Metric: The Landing Page Score

For SEOs, the most critical takeaway from this patent is the concept of a “Landing Page Score.” While Google has long used quality scores in advertising (Google Ads), this patent hints at a similar mechanism for organic search results. What would influence this score? Based on the patent’s logic, several factors are likely:

Relevance to Long-Tail Queries

Generic pages that try to rank for everything but specialize in nothing will likely score poorly. If your landing page doesn’t directly answer a specific facet of a query, Google may feel the need to “help” the user by creating a more specific version.

User Experience and Friction

Pages that


OpenAI: ChatGPT now has 900 million weekly active users

The Unprecedented Growth of ChatGPT

The landscape of the digital world has undergone a seismic shift as OpenAI officially announced that ChatGPT has surpassed 900 million weekly active users. This milestone marks a historic moment for the artificial intelligence industry, signaling that generative AI has moved beyond the “early adopter” phase and is now a core utility for nearly a billion people globally. When ChatGPT launched in late 2022, it reached 100 million monthly users faster than almost any application in history. However, reaching 900 million users on a weekly basis represents a different level of scale. It suggests that for a significant portion of the global internet-connected population, interacting with an AI model is now a habitual, daily, or weekly necessity. This growth has massive implications for how information is distributed, how brands are discovered, and how the traditional search engine market is evolving.

Breaking Down the Numbers: Funding and Subscriptions

The announcement of the 900 million user mark did not come in a vacuum. It was part of a larger update from OpenAI regarding its financial health and future scaling efforts. Alongside the user statistics, OpenAI revealed a new $110 billion funding round, a staggering figure that underscores the immense capital required to maintain and evolve the infrastructure behind large language models (LLMs). The financial details provide a clear picture of OpenAI’s transition from a research-focused lab to a commercial powerhouse. In addition to the massive free user base, the company reported:

50 Million Consumer Subscribers: These are individuals paying for ChatGPT Plus, providing a steady stream of recurring revenue that likely accounts for billions in annual turnover.

9 Million Paying Business Users: This figure includes users on ChatGPT Enterprise and ChatGPT Team plans.
The adoption of AI in the workplace is no longer speculative; nearly ten million professionals are now using a paid, secure version of the tool to enhance productivity, code development, and content creation. This level of monetization is rare for such a young platform. It demonstrates that users are not just experimenting with the technology out of curiosity—they are finding enough value in it to justify a monthly subscription fee.

How User Behavior is Fragmenting Beyond Traditional Search

For decades, the journey of an online user began with a search engine—primarily Google. If you wanted to find a product, learn a fact, or solve a problem, you typed a query into a search bar and browsed a list of blue links. The rise of 900 million weekly active users on ChatGPT signifies that this monopoly on “discovery” is fracturing. We are seeing a shift toward “fragmented discovery.” Instead of visiting multiple websites to piece together an answer, users are increasingly asking ChatGPT to synthesize that information for them. Whether it is comparing the best hiking boots for wide feet or debugging a complex script, the AI interface offers a conversational efficiency that traditional search often lacks. However, the nature of this behavior is nuanced. Data suggests that while users start their journey with AI for brainstorming or initial research, many still return to traditional search engines for confirmation. This “confirmation loop” is a critical behavior for marketers to understand. A user might ask ChatGPT for a recommendation, but they will often “Google” that recommendation to read recent reviews, check live pricing, or ensure the AI hasn’t hallucinated the details.

The Impact on SEO and Digital Marketing

The rise of ChatGPT to 900 million weekly users necessitates a complete rethink of SEO strategy. If nearly a billion people are asking an AI for information, being “number one on Google” is no longer the only metric that matters.
Digital marketers must now consider Generative Engine Optimization (GEO). In the world of AI-driven search, the goal is not just to rank for keywords, but to be included in the “knowledge graph” of the AI. When ChatGPT generates a response, it pulls from a vast training set and, increasingly, from real-time web browsing. If your brand, product, or expertise is not represented in the data that feeds these models, you effectively do not exist for a large segment of the market. Key areas where SEOs should focus include:

1. Authority and Citations

AI models prioritize high-authority sources and consensus-driven information. To be cited by ChatGPT, your content must be recognized as a reliable source by the broader web. This reinforces the importance of high-quality backlinks and digital PR.

2. Direct Answers and Structured Data

ChatGPT and other LLMs are designed to provide direct answers. Using clear, concise language and structured data (Schema markup) helps these models understand the context of your content, making it more likely to be summarized in a chat response.

3. Brand Visibility in Conversational Queries

Queries in ChatGPT are often longer and more conversational than those in Google. Marketers should optimize for long-tail, natural language questions rather than just short-head keywords. Understanding the “intent” behind a conversation is more valuable than tracking “clicks” in the traditional sense.

OpenAI’s Vision: Scaling AI for Everyone

The announcement titled “Scaling AI for Everyone” highlights OpenAI’s ambition to make artificial intelligence a ubiquitous layer of human life. With $110 billion in new funding, the company is signaling that it is prepared to invest heavily in the hardware, data centers, and talent required to push toward Artificial General Intelligence (AGI). The scale of 900 million users suggests that OpenAI is nearing a level of cultural penetration similar to that of Facebook or YouTube.
At this scale, the platform becomes an ecosystem. We are already seeing the emergence of the GPT Store, where developers can build custom versions of ChatGPT for specific tasks. This creates a “sticky” environment where users stay within the OpenAI ecosystem to solve various problems, rather than jumping between different apps and websites.

The Rise of Business and Enterprise Adoption

The report of 9 million paying business users is perhaps the most significant stat for the tech industry. It indicates that the initial security concerns surrounding LLMs are being addressed. Companies are now comfortable enough


You can now build PPC tools in minutes with vibe coding

The Revolution of Vibe Coding in Digital Marketing

The landscape of digital advertising is undergoing a seismic shift. For decades, the barrier between a great idea and a functional tool was the ability to write code. If you were a Pay-Per-Click (PPC) specialist with a vision for a custom script or a specialized dashboard, you generally had two choices: learn JavaScript or wait weeks for a developer to prioritize your request. That era is officially ending. We have entered the age of “vibe coding,” a paradigm shift where natural language and intent take precedence over syntax and semicolons. Frederick Vallaeys, a veteran of the industry who spent a decade at Google building foundational tools like the Google Ads Editor and another ten years as the CEO of Optmyzr, recently highlighted this transformation. According to Vallaeys, the release of advanced models like GPT-5 and the maturation of AI coding assistants mean that custom PPC tools can now be built in minutes, not months. This isn’t just a marginal improvement in productivity; it is a fundamental redesign of how digital marketers interact with technology.

Understanding the Traditional Scripting Problem

To appreciate where we are going, we must look at where we have been. Automation has always been the “holy grail” for PPC managers. Whether managing thousands of keywords or adjusting bids across dozens of accounts, there is always more work than there are hours in a day. For years, Google Ads Scripts were the primary solution. These scripts allowed users to automate repetitive tasks, pull custom reports, and bridge the gap between manual management and full-scale software. However, traditional scripting has a significant bottleneck: the technical barrier. In many industry presentations, Vallaeys asks audiences how many of them actually write their own scripts. Typically, only three to five out of 100 people raise their hands.
The remaining 95% are “copy-pasters”—they find a script online, tweak a few variables, and hope it doesn’t break. While this approach provides some utility, it prevents marketers from implementing their “secret sauce.” You are forced to use someone else’s logic rather than building a tool that perfectly fits your specific business needs or client requirements.

What is Vibe Coding?

Vibe coding is the process of building software by describing what you want in plain English. Instead of focusing on the mechanics of the code—loops, variables, and API calls—you focus on the “vibe” or the intent of the application. You talk to the AI like you would a human developer, and the AI handles the technical implementation in the background. This goes beyond simple code snippets. With the advent of GPT-5 and multimodal AI, you can now provide a sketch on a napkin or a whiteboard flowchart of a campaign decision tree. The AI analyzes the image, understands the logic, and generates a fully functional program. This capability moves us away from Software-as-a-Service (SaaS) and toward a world of “on-demand software.” If you need a tool for a task that would take you 90 minutes to do manually, it is now worth building a piece of “throwaway software” that automates it in five minutes.

The Evolution from Deterministic to Probabilistic Logic

One of the most profound changes vibe coding brings to PPC is the shift from deterministic to probabilistic logic. Traditional code is deterministic; it follows strict “if/then” rules. For example, if you wanted to write a script to identify competitor keywords in a search term report, you would have to manually list every possible competitor name and every variation thereof. If a new competitor entered the market or a user made a typo, the script would likely miss it. Vibe coding utilizes Large Language Models (LLMs), which are probabilistic. They understand nuance and context.
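The contrast can be sketched with a hypothetical competitor check. The competitor names below are invented, and `ask_llm` is a placeholder for whatever chat-completion client an AI-built tool would wire in—not a real API; the point is that the deterministic version can only match its hard-coded list, while the probabilistic version delegates the judgment to a model.

```python
# Deterministic: a hard-coded list misses new competitors and typos.
COMPETITORS = ["acme ads", "bidmax", "clickwise"]  # hypothetical brand names

def is_competitor_deterministic(search_term: str) -> bool:
    term = search_term.lower()
    return any(name in term for name in COMPETITORS)

# Probabilistic (sketch): hand the judgment to an LLM. `ask_llm` is a
# stand-in callable that takes a prompt string and returns the model's text.
def is_competitor_probabilistic(search_term: str, ask_llm) -> bool:
    prompt = (
        "We sell PPC management software. Is the search term "
        f"'{search_term}' likely a competitor brand? Answer YES or NO."
    )
    return ask_llm(prompt).strip().upper().startswith("YES")

print(is_competitor_deterministic("bidmax pricing"))   # True
print(is_competitor_deterministic("bid max pricing"))  # False: one typo defeats the list
```

The LLM-backed version has no list to defeat, which is exactly the "grey area" handling described here.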
You can ask an AI-built tool, “Is this search term likely a competitor?” and the LLM can make an informed judgment based on its training data. It doesn’t need a hard-coded list; it understands the intent behind the query. This allows for much more sophisticated automation that can handle the “grey areas” of digital marketing that previously required human oversight.

A New Workflow: From Months to Minutes

The old way of building internal tools or client-facing dashboards was notoriously slow and expensive. The process usually looked like this:

1. Writing Specifications

You would spend days or even weeks drafting a detailed technical requirements document. You had to anticipate every edge case and explain exactly how the data should flow from Point A to Point B.

2. Engineering and Development

You would hand the specs to a developer who would spend weeks building the first version. There was often a “lost in translation” effect where the final product didn’t quite match the original vision.

3. QA and Bug Fixing

You would find bugs, schedule follow-up meetings, and iterate. By the time the tool was ready for deployment, the market conditions or the client’s needs might have already changed.

The vibe coding workflow turns this on its head. Now, you can write a one-paragraph specification in five minutes. You feed that into an AI tool, which builds the software in about 15 minutes. You then spend three to five minutes per iteration, telling the AI to “add a button here,” “change this calculation,” or “make it look more professional.” In under an hour, you have a functional, high-quality tool.

Case Studies: Vibe Coding in Action

To demonstrate the power of this new approach, Vallaeys shared several examples of tools built using vibe coding in record time. These weren’t just simple scripts; they were interactive web applications and functional browser extensions.

The Persona Scorer

Using a tool called Lovable, Vallaeys built a persona scorer for ads.
He prompted the AI: “Build me a persona scorer for an ad that shows how well it resonates with five different audiences.” In less than 20 seconds, the AI provided a design vision and an initial build. He was then able to immediately iterate, asking the AI to expand the scope to ten audiences instead of


How to build a context-first AI search optimization strategy

The landscape of digital discovery is undergoing a fundamental transformation. For decades, Search Engine Optimization (SEO) was largely defined by a “keyword-string-first” mentality. Success was measured by how effectively a creator could match specific words in a query to specific words on a page. However, the rise of Large Language Models (LLMs) and generative AI has ushered in a new era where context, semantics, and intent take center stage. AI-based discovery offers a level of sophistication that traditional algorithms could only hint at. Instead of merely scanning for keywords, modern search systems and AI assistants aim to understand the “semantic environment” of a piece of content. Optimization is no longer just about reinforcing a primary keyword; it is about constructing a retrievable, high-density environment of meaning around that topic. This shift impacts every facet of content creation, from initial research and site architecture to the final word on the page. To succeed in this new environment, brands and publishers must move beyond traditional keyword lists and embrace a context-first strategy. This means prioritizing how information is structured, how concepts are linked, and how clearly a page answers the underlying intent of a user. Whether you are writing every word manually or utilizing automated workflows, understanding the mechanics of contextual optimization is essential for long-term visibility.

Reframing your publishing strategy around context

The concepts of context, semantics, and intent have been part of the SEO conversation for years. Approaches like Latent Semantic Indexing (LSI) were early attempts to describe what we now see fully realized in AI search. However, the difference today lies in the execution and the platform. We are no longer just optimizing for a search engine results page (SERP); we are optimizing for LLM-based discovery engines that “read” and “summarize” content in real time.
If you are already operating with a context-first mindset, you are likely ahead of the curve. You focus on topics rather than just terms. But for those still rooted in keyphrase-first approaches, a pivot is required. This transition involves reframing your entire publishing strategy. It affects how content is categorized, how site taxonomy is built, and how schema is applied. One of the most significant changes is the move away from verbosity for the sake of word count. In the past, “longer was better” because it provided more opportunities to hit keyword variations. In the age of AI, getting to the point matters more. AI models value “information density.” Content that provides clear, concise answers within a rich contextual framework is more likely to be retrieved and cited by an AI. This benefits both the machine layer, which needs to process information efficiently, and the human reader, who wants immediate value. Keywords have not become obsolete, but they have evolved. They are no longer isolated tactics; they are the anchors for broader themes. A context-led strategy requires a more holistic view of what your content represents and how it connects to the broader knowledge graph of your industry.

Structure for a contextual-density approach

To build a context-first strategy, we must view the primary keyphrase as a multidimensional axis point. Rather than seeing a topic as a single phrase, we should view it as a “semantic field.” This field is composed of several layers that provide the necessary depth for an AI to recognize the content’s authority and relevance. A comprehensive framework for contextual density includes several key areas:

Axis Term: The primary topic or core keyphrase that serves as the center of the content.

Structural Context: The secondary and tertiary concepts that define the boundaries of the topic.

Problem Context: The specific intent or “pain point” the user is trying to solve.
Linguistic Variants: Naturally fanned-out phrasing, including synonyms and stemmed variations.

Entity Associations: Links to known people, places, brands, or established concepts within the field.

Retrieval Units: Content organized into “chunks” that are easy for an LLM to process and summarize.

Structural Signals: The use of internal linking, schema markup, and logical taxonomy to signal meaning.

While the axis term remains the anchor, the “other” words—the headings, the subheadings, and the references to related concepts—are what truly define performance. An AI evaluates the sum of these parts to determine if a page is a comprehensive resource or just a thin attempt at keyword matching. This is the essence of contextual density: providing a rich environment where the primary topic is supported by a network of related information.

Context density and SERP-level linguistic analysis

One of the most effective ways to understand contextual density is through SERP-level linguistic analysis. This approach involves analyzing the top-performing results for a given topic to identify the common linguistic patterns and entities they share. This isn’t just about looking at what keywords they use, but about identifying the “supporting vocabulary” that search engines associate with a high-quality answer. This concept isn’t entirely new. As far back as 2016, platforms like Searchmetrics, led by Marcus Tober, began offering tools that scraped the top results for a keyword and weighted the specific words and entities common across those high-ranking pages. These tools provided a roadmap for “hyper-context,” showing creators exactly which modifiers and related concepts were necessary to appear authoritative. Modern tools like Clearscope and others have refined these methods, using advanced algorithms to suggest the semantic indicators that yield the best content performance.
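A toy version of this kind of SERP-level analysis—surfacing the vocabulary shared across top-ranking pages—might look like the following. The sample page texts are invented, and real tools layer entity recognition and TF-IDF-style weighting on top of this simple overlap count.

```python
from collections import Counter
import re

def shared_vocabulary(top_pages: list[str], min_pages: int = 3) -> list[str]:
    """Return terms appearing on at least `min_pages` of the top-ranking
    pages. A toy stand-in for SERP-level linguistic analysis."""
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}
    # One set of distinct terms per page, so each page counts a term once.
    page_terms = [
        set(re.findall(r"[a-z]+", page.lower())) - stopwords
        for page in top_pages
    ]
    counts = Counter(term for terms in page_terms for term in terms)
    return sorted(t for t, c in counts.items() if c >= min_pages)

pages = [
    "Waterproof hiking boots for wide feet: fit guide and sizing",
    "Best wide-fit waterproof boots for hiking trails",
    "Hiking in wet weather: waterproof boots and wide sizing tips",
]
print(shared_vocabulary(pages))  # ['boots', 'hiking', 'waterproof', 'wide']
```

The shared terms are the “supporting vocabulary” a competitive page would be expected to cover.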
In competitive niches, this level of analysis is often the difference between ranking on page one and being buried in the archives. When you include the specific entities and linguistic modifiers that an AI expects to see within a certain topic, you are speaking the “language” of the algorithm.

Using secondary and tertiary keyphrases as contextual linguistic struts

Once you understand the broader semantic field, you can begin to construct your content using “linguistic struts.” These are your secondary and tertiary keyphrases. They shouldn’t be viewed as items to be checked off a list, but as structural elements that support the weight of your primary topic. Think of secondary keywords as context stabilizers. They help define the


Local GEO & AI Search: A 90-Day Plan to Make Every Location AI-Ready via @sejournal, @hethr_campbell

The Evolution of Local Search: From Traditional SEO to Generative Engine Optimization

The landscape of local search is undergoing its most significant transformation since the introduction of the smartphone. For years, multi-location brands relied on a familiar playbook: optimize Google Business Profiles, manage local citations, and build backlinks to landing pages. While these tactics remain essential, the rise of Artificial Intelligence (AI) and Generative Engine Optimization (GEO) has introduced a new layer of complexity. AI-powered search experiences like Google’s AI Overviews, ChatGPT, and Perplexity do not just rank websites; they synthesize information to provide direct answers. To remain visible, every location in a brand’s network must be “AI-ready.” This means ensuring that AI models—Large Language Models (LLMs)—can easily find, understand, and trust the data associated with each physical storefront. If an AI cannot verify your business hours, services, or reputation across multiple sources, it simply won’t recommend you to the user. This 90-day plan is designed to bridge the gap between traditional local SEO and the future of AI-driven discovery.

Phase 1: Days 1–30 – Establishing the Source of Truth

The first 30 days are dedicated to data hygiene and foundational structure. AI models thrive on consistency. If your location data is fragmented or contradictory, LLMs will assign a lower confidence score to your brand, leading to reduced visibility in AI-generated responses.

Comprehensive Data Audit

Start by auditing every single location in your portfolio. This involves more than just checking addresses and phone numbers. You must ensure that the Name, Address, Phone (NAP), and Website URL are identical across all primary platforms. For multi-location brands, this is often where the first breakdown occurs. Small discrepancies, such as “Suite 100” vs.
“#100,” can confuse older algorithms and create friction for AI models trying to verify entity relationships.

Optimizing the Primary Local Ecosystem

While Google Business Profile (GBP) remains the heavyweight, AI-ready brands must look beyond a single platform. Models like Apple’s Siri and specialized AI tools pull heavily from Apple Business Connect. Similarly, Microsoft’s Copilot relies on Bing Places. During this first month, ensure that every location is claimed, verified, and fully populated on these three core platforms. Pay special attention to categories; AI uses these to understand the “entity” of your business. Be specific—if you are a “Vegan Italian Restaurant,” do not simply settle for “Restaurant.”

Advanced Schema Markup Implementation

Schema markup is the language of AI. It provides the structured data that allows search engines to understand the context of your content without needing to guess. For local locations, you must implement specific JSON-LD Schema, including LocalBusiness, Store, or ProfessionalService types. Ensure your code includes coordinates (latitude and longitude), social media profiles (sameAs), and specific service offerings. This creates a “Knowledge Graph” for your brand that AI agents can easily parse.

Phase 2: Days 31–60 – Content Strategy for Generative Engines

Once the foundation is solid, the focus shifts to content. Unlike traditional search, where keywords were king, AI search prioritizes entities and context. During month two, the goal is to provide the “why” and “how” behind each location.

Developing Location-Specific Helpful Content

Generic, templated pages for 50 different locations will no longer suffice in an AI-driven world. AI models are trained to prioritize “helpful content” that demonstrates first-hand experience and expertise. For each location, create unique content that highlights its relationship with the local community.
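As a sketch of the schema guidance from Phase 1, the snippet below assembles a minimal LocalBusiness-style JSON-LD payload with coordinates and sameAs links. Every business detail here is an invented placeholder; in production the emitted JSON would sit inside a `<script type="application/ld+json">` tag on the location page.

```python
import json

# Minimal LocalBusiness JSON-LD sketch. All names, addresses, URLs,
# and coordinates below are placeholder examples, not a real business.
location = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Example Vegan Italian Kitchen",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "100 Example St, Suite 100",
        "addressLocality": "Chicago",
        "addressRegion": "IL",
        "postalCode": "60601",
    },
    "telephone": "+1-312-555-0100",
    "url": "https://example.com/locations/chicago",
    "geo": {
        "@type": "GeoCoordinates",
        "latitude": 41.8853,
        "longitude": -87.6229,
    },
    "sameAs": [
        "https://www.facebook.com/example",
        "https://www.instagram.com/example",
    ],
    "servesCuisine": ["Vegan", "Italian"],
}

# Serialize for embedding in the page's <head>.
print(json.dumps(location, indent=2))
```

Note the specific `Restaurant` type rather than a generic `LocalBusiness`, mirroring the category advice above.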
Such content might include information about local parking, nearby landmarks, or specific community events the business sponsors. This local relevance helps AI engines associate your brand with a specific geographic “entity.”

Entity-Based Optimization

AI search doesn’t just look for strings of text; it looks for things (entities). To make a location AI-ready, you must link it to other high-authority entities. For example, if a clinic is located near a major university, mention that relationship. If a retail store carries specific high-authority brands, list them. This creates a web of associations that allows an LLM to understand exactly where your business fits within the local ecosystem.

Focusing on Conversational Queries

Users interact with AI differently than they do with a search bar. They ask questions like, “Where is the best place to get a quick healthy lunch near the convention center?” Your content strategy should reflect this shift. Use H2 and H3 headings to answer specific questions. Incorporate a localized FAQ section for every location page, addressing common customer pain points and inquiries. By mirroring the natural language used in AI prompts, you increase the likelihood of being the featured answer.

Phase 3: Days 61–90 – Building Authority and Monitoring Visibility

The final phase is about validation and performance tracking. AI models prioritize information that is corroborated by third parties. You must prove to the AI that your business is a trusted authority in the real world.

Aggressive Review Management and Sentiment Analysis

Reviews are one of the most significant signals for AI trust. However, AI doesn’t just look at the star rating; it analyzes the sentiment and the keywords within the reviews. Encourage customers to be specific in their feedback.
A review that says “The deep-dish pizza at this Chicago location was incredible” is far more valuable for GEO than one that just says “Great service.” Use this period to respond to all reviews—both positive and negative—as this activity signals to AI engines that the business is active and responsive.

Local Link Building and Citations 2.0

Traditional citations (Yelp, Yellow Pages) still matter for verification, but “Citations 2.0” focuses on local digital PR. AI models look for mentions in local news outlets, neighborhood blogs, and chamber of commerce sites. Aim for high-quality, local mentions that link your brand to the community. These external “votes of confidence” act as corroborating evidence for the data you’ve provided in your Schema markup.

Monitoring AI “Share of Voice”

The metrics of success are changing. While you should still track organic rankings, you must also begin monitoring your “AI Share of Voice.” Use tools that track citations within Google’s AI Overviews or Perplexity. Are your locations being recommended


The dark SEO funnel: Why traffic no longer proves SEO success

Search engine optimization is currently undergoing its most radical transformation since the inception of the commercial web. For decades, the industry operated on a linear, predictable model: rank for a keyword, earn a click, and attempt to convert that visitor into a customer. This was the era of the transparent funnel, where every step of the buyer’s journey was visible within the confines of Google Analytics and Search Console. Today, that model is fundamentally broken. SEO is transitioning from a discipline of clicks and rankings to one of ingestion and recommendation. We have entered the age of the “dark SEO funnel.” In this new paradigm, traditional top-of-funnel (TOFU) traffic is collapsing as users find answers directly within AI interfaces. The “messy middle” of the buyer’s journey has become even more opaque, and for the first time in history, a successful SEO strategy might actually result in a decrease in total website traffic. If your organization is still using raw session counts as the primary KPI for SEO success, you are optimizing for a digital ecosystem that no longer exists.

The Collapse of the Traditional Search Funnel

The traditional search funnel was built on the premise that Google was the starting point for every inquiry. Whether a user was looking for a broad definition of a concept or a specific product comparison, they began at a search bar, clicked a blue link, and landed on a website. This provided marketers with a clear trail of breadcrumbs to follow. However, recent data suggests that the discovery phase has moved into “dark” territory. According to research from Wynter, 84% of B2B buyers now utilize AI tools for vendor discovery. More strikingly, 68% of these buyers initiate their search process within AI platforms—such as ChatGPT, Claude, or Perplexity—before they ever consider visiting Google.
This shift represents a massive migration of search intent away from trackable web environments and into the “black box” of Large Language Models (LLMs). When a buyer asks an AI to “compare the top five CRM platforms for mid-market manufacturing companies,” they receive a synthesized recommendation without ever visiting the websites of those five companies. The discovery happens, the evaluation occurs, and the shortlisting is completed—all before a single click is registered in your analytics dashboard. This is the dark SEO funnel: a world where discovery is invisible and attribution is nearly impossible to solve with traditional tools.

Defining the Dark SEO Funnel

To understand the dark SEO funnel, we must look at its predecessor: dark social. In the world of social media, “dark social” refers to the private sharing of content through channels like Slack, WhatsApp, and email—places where tracking pixels cannot reach. A peer recommends a tool in a private community, and the recipient later searches for that brand directly. The original source of the lead remains hidden. Dark SEO follows an algorithmic version of this pattern. Instead of a peer making the recommendation in a DM, an LLM makes the recommendation based on its training data. The process typically follows three distinct, largely untraceable stages:

1. Ingestion

The first stage is where the LLM consumes your content. This happens during the training phase or through real-time web crawling (as with GPT-4o or Perplexity). The AI doesn’t just index your keywords; it understands your brand as an “entity.” It maps your features, your reputation, and your authority relative to specific problem sets. This stage is completely invisible to SEOs. There is no “crawl report” that tells you how well an LLM has “understood” your brand’s unique value proposition.

2. Recommendation

The second stage occurs when a user asks a problem-aware question.
Unlike a traditional search query like “best marketing software,” these prompts are often long, nuanced, and highly specific. The LLM processes the user’s requirements and recommends your brand as a specific solution. This interaction occurs within the AI interface. No traffic is sent to your site yet, but the seed of a buying decision has been planted.

3. Verification

The final stage is where traditional SEO metrics finally catch a glimpse of the activity—but they often misinterpret it. Once the AI has narrowed down the options, the user moves to Google to verify the choice. They might search for “[Brand Name] reviews,” “[Brand Name] pricing,” or “[Brand Name] vs [Competitor].” When they eventually click through and convert, the credit is attributed to “branded search” or “direct traffic.” The reality, however, is that the SEO work (ensuring the brand was prominent in the AI’s training data) was what fueled the conversion.

The New Role of Search Engines: From Discovery to Verification

The fundamental role of Google is shifting from a discovery engine to a verification engine. As one CMO noted in the Wynter study: “I use Google only if I have certainty about which specific software types or products I want.” This sentiment highlights a radical shift in user behavior that will define marketing strategies through 2026 and beyond. AI is now for evaluating options, weighing pros and cons, and narrowing down a list of candidates. Google is used to validate those choices. This means that while top-of-funnel traffic for broad, informational keywords is drying up, the value of the traffic that remains is actually increasing. The visitors who do reach your site are further down the funnel and have a higher intent to buy. However, because they are skipping the traditional “discovery” pages on your site, your total traffic numbers will likely look lower than they did in previous years.

The Strategic Shift: Brand Mentions vs. LLM Citations

To succeed in the era of the dark funnel, marketers must shift their focus from optimizing for blue links to optimizing for inclusion. Inclusion in the AI-driven world happens through two primary mechanisms: brand mentions and URL citations.

The Power of Brand Mentions and Entity Strength

In traditional SEO, we focused on backlinks to pass “juice” or authority. In the dark SEO funnel, we focus on entity strength. This is a measure of how frequently and authoritatively your brand name
