Author name: aftabkhannewemail@gmail.com

Uncategorized

44% of ChatGPT citations come from the first third of content: Study

The Shift from Traditional Search to AI Retrieval

For decades, search engine optimization (SEO) was defined by a specific set of rules: keywords, backlinks, and comprehensive "ultimate guides." Digital publishers were encouraged to create long-form content that kept users scrolling, believing that the longer a reader stayed on a page, the more authoritative that page appeared to search engines like Google. However, the rise of Large Language Models (LLMs) and AI-driven search engines like ChatGPT and Perplexity is fundamentally altering the anatomy of successful content. A groundbreaking new study suggests that if your most valuable insights aren't at the very top of your page, they might as well not exist for the AI.

Growth Advisor Kevin Indig recently spearheaded an exhaustive analysis of how ChatGPT interacts with web content. By examining 1.2 million AI-generated answers and 18,012 verified citations, Indig's team identified a clear, statistically robust bias in how AI "reads" and credits its sources. The core finding is startling: nearly half of all citations generated by ChatGPT come from the first third of a given article. This "front-loading" phenomenon represents a massive shift in how writers and marketers must approach content structure if they want to remain visible in an AI-dominated ecosystem.

The Data: The "Ski Ramp" Citation Pattern

The study reveals what Indig calls a "ski ramp" pattern of citation. Rather than scanning an entire article with equal weight, ChatGPT prioritizes information based on its placement on the page. The numbers provide a clear roadmap of AI attention spans:

The First Third: 44.2% of all citations are pulled from the first 30% of the content.
The Middle Section: 31.1% of citations come from the 30% to 70% range of the article.
The Final Third: Only 24.7% of citations come from the end of the content, with a significant drop-off as the reader approaches the footer and navigation elements.
This data suggests that the "delayed payoff" strategy, where a writer builds a narrative and delivers the "punchline" or most valuable conclusion at the end, is an active disadvantage in the era of AI retrieval. While humans might appreciate a well-paced story, AI is looking for immediate classification and direct answers to feed into its response engine.

Deep Dive: How AI Reads at the Paragraph Level

While the high-level data shows a preference for the top of the page, the way AI parses individual paragraphs is slightly more nuanced. Indig's team used sentence-transformer embeddings to match ChatGPT's responses to specific source sentences, revealing that AI doesn't just look at the first sentence of a paragraph and move on. At the paragraph level, the distribution of citations looks like this:

The Middle: 53% of citations are pulled from the middle sentences of a paragraph.
The Opening: 24.5% come from the first sentence.
The Closing: 22.5% come from the final sentence.

This suggests that while the AI wants its topics early in the article, it looks for density and context within the paragraph itself. The middle of a paragraph is often where a writer explains a concept, provides a statistic, or adds the necessary detail that gives an answer its substance. For content creators, this means that the first sentence should establish the topic clearly, but the "meat" of the information should be tightly packed in the sentences that follow.

The DNA of a Cited Passage: Five Key Traits

Positioning is only half the battle. The study also isolated the linguistic traits that make a specific sentence or paragraph "cite-worthy" in the eyes of an LLM. By comparing highly cited passages with those that were ignored, Indig identified five distinct characteristics of winning content.

1. Definitive and Declarative Language

AI models prefer certainty.
Passages that were cited were nearly twice as likely to use clear, definitive language such as "X is" or "X refers to." When an LLM is trying to provide an answer to a user, it searches for sentences that provide a direct subject-verb-object relationship. Vague framing, rhetorical questions that aren't immediately answered, and overly flowery prose act as "noise" that the AI tends to filter out in favor of clear definitions.

2. The Conversational Q&A Structure

The study found that cited content was twice as likely to include a question mark. Interestingly, 78.4% of citations tied to questions originated from headings (H2 or H3 tags). In the workflow of an LLM, a heading often functions as a "prompt," and the following paragraph is treated as the "answer." By structuring your article with headings that mirror the questions users are actually asking, you are essentially pre-formatting your content for the AI to digest and cite.

3. High Entity Density

In linguistic terms, an "entity" is a specific brand, person, place, tool, or unique concept. Standard English text usually contains between 5% and 8% proper nouns. However, the text cited by ChatGPT averaged an entity density of 20.6%. Specificity is the anchor of AI retrieval. Using specific names and technical terms reduces ambiguity, making it easier for the model to verify that the information is relevant to the user's query.

4. Balanced Sentiment and "Analyst" Tone

The tone of a cited passage matters. The study measured subjectivity on a scale where 0 is a dry fact and 1 is a purely emotional opinion. Cited text consistently clustered around a subjectivity score of 0.47. This is the "Goldilocks zone" of content: it is neither a boring list of raw data nor an overly biased marketing pitch. It resembles "analyst commentary," providing a mix of objective facts and professional interpretation. This balanced tone builds the perceived "authority" that the AI is looking for when sourcing information.

5. Business-Grade Readability

Clarity wins over complexity. The study used the Flesch-Kincaid grade level to measure readability. Content that was frequently cited had an average grade level of 16 (equivalent to a college senior), whereas lower-performing content averaged a much denser 19.1. While the AI is capable of understanding complex academic prose, it prioritizes efficiency. Shorter sentences and a plain, logical structure allow the model to process the content efficiently.
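Two of the traits above, entity density and Flesch-Kincaid grade level, are easy to approximate on your own drafts. A minimal sketch, assuming a naive capitalization heuristic stands in for real named-entity recognition and a vowel-group count stands in for true syllable counting (the study's actual tooling is not described at this level of detail):

```python
import re

def entity_density(text: str) -> float:
    """Rough proxy for entity density: the share of words that are
    capitalized but not sentence-initial. Real analyses would use an
    NER model; this heuristic is only illustrative."""
    total = proper = 0
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = re.findall(r"[A-Za-z][A-Za-z'-]*", sentence)
        for i, w in enumerate(words):
            total += 1
            if i > 0 and w[0].isupper():  # skip sentence-initial caps
                proper += 1
    return proper / total if total else 0.0

def count_syllables(word: str) -> int:
    """Vowel-group heuristic; close enough for a readability estimate."""
    n = len(re.findall(r"[aeiouy]+", word.lower()))
    if word.lower().endswith("e") and n > 1:  # drop silent final 'e'
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Standard Flesch-Kincaid grade formula:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z][A-Za-z'-]*", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59
```

On the study's numbers, a passage scoring around 20% on the first function and around grade 16 on the third would match the profile of frequently cited text.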


Google Ads adds Results tab to show impact of applied recommendations

Introduction to the Google Ads Results Tab

For years, Google Ads users have navigated a complex relationship with the platform's "Recommendations" section. While these automated suggestions are designed to improve performance and optimize account health, many digital marketers and business owners have remained skeptical. The primary concern has always been a lack of transparency: how do we know if these changes actually drive growth, or if they simply increase ad spend?

Google is now addressing this transparency gap with the introduction of a dedicated "Results" tab within the Recommendations interface. This new feature represents a significant shift in how Google interacts with its advertisers. Instead of merely projecting potential gains, the Results tab allows advertisers to see the measured performance impact after they apply specific bid and budget suggestions. By providing a retrospective look at performance, Google is offering a layer of accountability that has been missing from its automated ecosystem. This allows performance marketing teams to evaluate the actual business value of the platform's guidance rather than relying on faith in the algorithm.

The Evolution of Google Ads Recommendations

To understand the importance of the Results tab, one must look at the history of Google's Optimization Score and its accompanying recommendations. Historically, Google has used these tools to encourage advertisers to adopt new features, increase budgets, or broaden their targeting. For the advertiser, these suggestions often felt like a "black box." You could see an increase in your Optimization Score, but correlating that specific change to a direct increase in Return on Ad Spend (ROAS) was often a manual and tedious process of comparing date ranges and campaign logs. The Results tab changes the narrative.
It moves the conversation from "what might happen" to "what did happen." This update is currently being rolled out as an early pilot, as confirmed by Google, after being spotted by industry experts like Hana Kobzová, founder of PPCNewsFeed. It marks a move toward a more data-driven partnership between the advertiser and the AI-driven automation that now powers much of the Google Ads environment.

How the Results Tab Works: Measuring Incremental Lift

The core functionality of the Results tab centers on the concept of incremental lift. When an advertiser applies a recommendation (specifically regarding budgets or bidding targets), the system does not simply look at the raw data in a vacuum. Instead, it employs a sophisticated analysis to determine the true impact of that change. After a recommendation is applied, Google waits for a period of one week. This "cooling off" or learning period is essential because bidding algorithms often need time to recalibrate after a change is made to a target CPA (Cost Per Acquisition) or a daily budget. Once this week has passed, Google analyzes the campaign's performance and compares it to an estimated baseline. This baseline represents a projection of what the campaign's performance likely would have been had the recommendation never been applied.

The system then highlights the delta between the actual performance and the baseline. This delta is the incremental lift. For example, if you raised your budget on a Search campaign, the Results tab might show that this change generated 15 additional conversions that you wouldn't have received otherwise. By focusing on these incremental gains, Google provides a clearer picture of whether the extra spend was justified by the resulting volume.

Where to Find and Navigate the Results Tab

Finding the new data is relatively straightforward, as Google has integrated it directly into the existing Recommendations workflow.
Advertisers can find impact reporting within the Recommendations area of their account. There are two primary ways the data is presented:

The Summary Callout

On the main Recommendations page, Google provides a high-level summary callout. This serves as a quick glance for account managers to see the overall impact of recent changes. It highlights the most significant wins and provides an at-a-glance view of how recently applied recommendations are contributing to the account's primary goals.

The Dedicated Results Tab

For those who need to dive deeper, the dedicated Results tab offers a comprehensive breakdown. Within this tab, data is typically grouped into categories such as Budget and Target recommendations. Advertisers can use various filtering options to isolate specific campaigns, date ranges, or types of recommendations. This level of granularity is vital for reporting to stakeholders and understanding which specific automated interventions are yielding the best results.

Key Metrics and Reporting Windows

The Results tab does not provide real-time data immediately after a click. Instead, it uses a specific methodology to ensure the data is statistically relevant and accurate. Understanding these windows is crucial for advertisers who want to interpret the data correctly. Google reports these results as a seven-day rolling average. This helps to smooth out daily fluctuations in traffic and conversion volume, providing a more stable view of performance trends. Furthermore, this reporting is measured across a 28-day window following the application of a recommendation. This 28-day period is significant because it accounts for various conversion windows and the time it takes for a user to move through the sales funnel. The metrics displayed in the Results tab are aligned with the campaign's primary bidding objective. If your campaign is set to "Maximize Conversions," the Results tab will focus on the number of conversions and the cost per conversion.
If your objective is "Maximize Conversion Value" (common in e-commerce), the focus will be on the total value generated and the ROAS. This ensures that the impact reported is relevant to the specific goals the advertiser has defined for their campaigns.

Why This Matters: Accountability in the Age of Automation

The digital advertising landscape is moving rapidly toward full automation. With the rise of Performance Max campaigns and "Auto-apply" recommendations, advertisers are handing over more control to Google's machine learning models than ever before. While this can lead to efficiency, it also leads to a sense of powerlessness among PPC professionals who need to justify every dollar spent. The Results tab introduces a much-needed layer of accountability. It allows advertisers to verify whether the platform's recommendations actually deliver measurable value.
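The reporting mechanics described above (a seven-day rolling average, and lift measured against an estimated baseline) reduce to simple arithmetic. A minimal sketch, assuming daily conversion counts are available as plain lists; the baseline here is a supplied estimate, not Google's actual projection model:

```python
from statistics import mean

def seven_day_rolling_average(daily):
    """Trailing 7-day average, starting on the first day
    with a full week of history behind it."""
    return [mean(daily[i - 6:i + 1]) for i in range(6, len(daily))]

def incremental_lift(actual, baseline):
    """Per-day delta between measured results and the estimated
    'what would have happened without the change' baseline."""
    return [a - b for a, b in zip(actual, baseline)]
```

Summing the lift over the 28-day measurement window would yield the kind of "additional conversions" figure the Results tab reports, such as the 15 extra conversions in the example above.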


AI search KPIs: Focus on inclusion, not position

The Paradigm Shift: Why Rank #1 is No Longer the Only Goal

For decades, the search engine optimization industry has lived and died by a single metric: Position 1. In the world of traditional Google search, the top spot is more than just a badge of honor; it is the "golden ticket" to digital success. Moving from the second result to the first can trigger a monumental shift in a business's fortunes, often resulting in traffic and conversion increases of 100% to 300%. This winner-take-all dynamic has dictated every SEO strategy, from keyword targeting to backlink acquisition.

However, as we enter the era of Artificial Intelligence and Large Language Models (LLMs), this foundational belief is being challenged. We are seeing a surge of SEO professionals on platforms like LinkedIn celebrating "ranking #1 on ChatGPT" as if it were the same accomplishment as winning the top spot on a Google SERP (Search Engine Results Page). But the reality is that AI search operates on a fundamentally different set of rules. In the world of AI, the focus must shift away from mere position and toward a much more nuanced goal: inclusion in the consideration set. The transition from a search engine that lists links to an AI engine that provides answers means that being first isn't the primary driver of a click anymore. Instead, the quality of your inclusion and the way the AI describes your service determine whether or not a user decides to engage with your brand.

User Behavior: Comparing Google Search vs. AI Interactions

To understand why traditional KPIs are failing, we must first examine how user behavior differs between a search engine and an AI assistant. Recent research involving over 100 hours of observation shows that users interact with AI platforms like ChatGPT and Google's AI Mode in ways that deviate significantly from traditional clicking patterns. In a standard Google search, the user is presented with a list of blue links. Each link represents a potential answer.
To verify that answer, the user must click the link, visit the website, scan the content, and then decide if it meets their needs. This process involves friction. Every click is a commitment of time. Because of this friction, users naturally gravitate toward the top result to save effort. If the first result is "good enough," the journey often ends there.

Contrast this with the AI experience. When a user asks an AI for a service (for example, "Find me a fractional CMO for my startup"), the AI does the heavy lifting. It scans its training data or searches the web in real time, synthesizes the information, and presents a curated list or a comparative summary. The friction of clicking through multiple websites is replaced by a single, easily scannable response. This leads to a critical behavioral change: users consider more options.

The Power of the Consideration Set

Our data reveals that AI users consider an average of 3.7 businesses before making a final decision on who to contact. In traditional search, a user might only look at the first or second result. In AI chat, the consideration set expands because the information is already summarized and presented in a side-by-side format within the chat window. Because the AI provides summaries of four, five, or even eight businesses at once, the value of being "Number 1" drops sharply. Simultaneously, the value of appearing in positions 2 through 8 rises. If 75% of users are looking past the first mention to evaluate the rest of the list, your goal isn't just to be at the top; it's to be the most compelling option in that group of 3.7 businesses.

The Myth of Static Rankings in AI

One of the most dangerous traps for modern SEOs is treating AI responses as a static leaderboard. Google's organic rankings are relatively stable; while they fluctuate, they don't usually change entirely from one minute to the next for the same user. AI search is different. It is probabilistic, not deterministic.
AI models generate responses word by word based on probability. This means that a prompt entered at 9:00 AM might list your business first, while the exact same prompt at 9:05 AM might list you third, or format the entire response into a comparison table where "position" is irrelevant. Furthermore, AI platforms are designed to be conversational. A user might follow up with, "Which of these is best for a small budget?" or "Which one has the most experience in SaaS?" These refinements completely reshuffle the results based on context, not just generic authority. Focusing on a KPI like "ChatGPT Rank" is chasing a ghost. Instead, the focus should be on the "Inclusion Rate": how often your brand appears when relevant queries are made within your niche, regardless of whether you are listed first, third, or fifth.

Why Lower Rankings Win in LLM Environments

In the traditional SEO mindset, if you are in position #8, you are essentially invisible. On page one of Google, the bottom results get a fraction of the traffic that the top three receive. In an AI chat environment, being #8 is far from a death sentence. It might actually be an opportunity to win the conversion through superior messaging. Consider a search for a local service, such as an ophthalmologist. An AI response might list five doctors. Even if a specific clinic (let's call it Bannett Eye Centers) is listed last, the AI's description might highlight that it specializes in exactly what the user is looking for, such as "advanced glaucoma care." If the other four results are described as generalists, the user will likely bypass the first four options and contact the fifth. This happens because approximately 60% of users make their entire decision based on the AI response itself, without ever visiting the underlying website. They aren't clicking through to verify; they are trusting the AI's summary.
Therefore, "winning" isn't about being at the top of the list; it's about ensuring the AI has the right information to label you as the "best fit" for the user's specific need.
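The "Inclusion Rate" KPI described earlier lends itself to direct measurement: run the same prompt repeatedly and count how often the brand appears at all, ignoring position. A minimal sketch; `ask_model` is a hypothetical callable standing in for a real chat-completion API client:

```python
def inclusion_rate(prompt, brand, ask_model, samples=20):
    """Fraction of sampled AI responses that mention the brand anywhere.
    Because AI answers are probabilistic, a single response is noise;
    repeated sampling gives a more stable visibility metric."""
    hits = sum(
        brand.lower() in ask_model(prompt).lower()  # position-agnostic check
        for _ in range(samples)
    )
    return hits / samples
```

Tracking this rate weekly across a basket of niche prompts replaces a fragile "ChatGPT rank" with a KPI that reflects how AI responses actually behave.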


Perplexity stops testing advertising

The Shift in AI Search Strategy: Perplexity's Bold Move

In the rapidly evolving landscape of artificial intelligence, the battle for dominance is no longer just about who has the most sophisticated large language model (LLM). It is increasingly about user trust, monetization strategies, and the definition of a "search engine" versus an "answer engine." In a significant pivot that has sent ripples through the digital marketing and tech communities, Perplexity AI has officially halted its testing of advertising placements. This move signals a fundamental shift in how the company views its relationship with users and its long-term viability in a market currently dominated by Google and OpenAI.

The decision to abandon sponsored placements, even those clearly labeled as ads, stems from a core belief that advertising risks undermining the very foundation of an AI-driven information platform: integrity. For a company that markets itself as a provider of objective, cited truths, the presence of paid influence creates a conflict of interest that Perplexity executives are no longer willing to gamble on. As the tech world watches, Perplexity is doubling down on a subscription-first model, betting that users are willing to pay for an unpolluted, high-utility research tool.

The Rise and Fall of Advertising on Perplexity

To understand why this decision is so significant, one must look back at the trajectory Perplexity has taken over the last year. In 2024, the company began experimenting with sponsored answers. These were designed to appear beneath chatbot responses, offering brands a way to insert themselves into the conversational flow of a user's query. At the time, Perplexity assured its user base that these ads were clearly labeled and, more importantly, did not influence the actual algorithmic output of the AI's primary response. However, the pilot program revealed a deeper psychological hurdle.
While the technical policy ensured that ads were separate from the "organic" answer, user perception told a different story. In the world of AI search, perception is often reality. If a user queries a medical question or a product recommendation and sees a brand name nearby, the suspicion of bias is naturally heightened. Perplexity's leadership realized that for a tool designed to deliver the "best possible answer," even the smallest seed of doubt regarding commercial influence could be fatal to the brand's reputation. Consequently, the company has phased out these tests. While reports suggest that Perplexity could theoretically revisit advertising in the distant future, the current stance is one of caution. Executives have even floated the possibility that the platform may "never ever" need to rely on ad revenue, a bold claim in an industry where data-driven advertising has been the primary engine of growth for decades.

Why Trust is the New Currency in AI

The core of Perplexity's argument against advertising centers on the "integrity of the answer." In a traditional search engine like Google, users have been conditioned for over twenty years to expect a mix of paid and organic results. There is a clear visual and mental separation between a sponsored link and a search result. However, an AI "answer engine" functions differently. It synthesizes information into a single, cohesive narrative. Inserting a sponsored element into that synthesis, or even adjacent to it, blurs the lines of authority. One executive noted that for the platform to succeed, a user must fundamentally believe that they are receiving the most accurate, unbiased information possible. Once an ad appears, the user begins to second-guess the response. Was this brand recommended because it is the best, or because it paid for the placement? Even if the answer is "the former," the mere existence of the question erodes the user experience.
By removing ads, Perplexity is attempting to create a "sanctuary of facts" that distinguishes it from the cluttered, ad-heavy environment of traditional search engines.

What This Means for Brands and Digital Marketers

For brands and digital marketing agencies, Perplexity's exit from the advertising space is a double-edged sword. On one hand, it removes a direct, paid pathway to reach a high-intent, fast-growing audience. Perplexity currently handles approximately 780 million monthly queries and boasts a user base of over 100 million people. These are often high-value users (researchers, developers, students, and professionals) who are looking for deep insights rather than quick transactional links. Without the ability to buy sponsored placements, brands are now forced to rely entirely on organic visibility. In the context of Perplexity, this means appearing in the citations. Perplexity's model works by searching the web in real time and citing its sources. If a brand wants visibility within the platform, it must focus on "Generative Engine Optimization" (GEO). This involves:

Building High-Authority Citations

Since Perplexity relies on external websites to provide the raw data for its answers, brands must ensure their content is authoritative, factually dense, and easily scrapable by AI agents. Being cited as a source is the only way to gain exposure within a Perplexity response.

Niche Authority and Thought Leadership

Perplexity tends to favor sources that provide comprehensive answers to complex questions. Brands that invest in deep-dive white papers, original research, and expert commentary are more likely to be picked up as credible sources by the AI's retrieval system.

The End of the "Pay-to-Play" Shortcut

In traditional search, a brand with a large budget can simply outbid competitors for the top spot. In the new Perplexity ecosystem, money cannot buy relevance.
This levels the playing field for smaller companies with high-quality content, but it creates a significant challenge for legacy brands used to dominating through ad spend.

The Subscription Model: A $200 Million Bet

If Perplexity isn't making money from ads, how does it plan to survive? The company's core business is now firmly rooted in subscriptions. With annualized revenue already reaching approximately $200 million, the model appears to be gaining traction. Perplexity offers a free tier for casual users, but its "Pro" and "Enterprise" plans, ranging from $20 to $200 per month, are the real revenue drivers. These paid tiers offer users access to more advanced models.


How to apply ‘They Ask, You Answer’ to SEO and AI visibility

The Shift from Keywords to Conversations

Search behavior has undergone a fundamental transformation. We are no longer in an era where users simply type fragmented keywords into a search bar and hope for the best. Today, search is conversational, inquisitive, and increasingly delegated to artificial intelligence. Whether through ChatGPT, Google Gemini, or Claude, users are outsourcing their complex decision-making processes to Large Language Models (LLMs). As Google evolves from a traditional search engine that lists links into a sophisticated "answer engine," businesses must adapt. The challenge is no longer just about ranking for a specific term; it is about becoming the definitive source that an AI pulls from when a user asks a question. If the machine cannot find clear, honest, and direct information about your brand, it will simply find it elsewhere, likely from a competitor or a third-party aggregator. To survive and thrive in this AI-first landscape, businesses need a content framework that prioritizes the user's needs above all else. This is where the "They Ask, You Answer" (TAYA) philosophy becomes an essential tool for modern SEO and AI visibility.

What is 'They Ask, You Answer'?

"They Ask, You Answer" is a business philosophy and content marketing framework popularized by Marcus Sheridan. The premise is deceptively simple: your customers have questions, and your job is to answer them honestly, thoroughly, and publicly. This includes the difficult questions that sales teams often try to dodge during the initial stages of a lead cycle. In traditional marketing, companies often hide their pricing, ignore their limitations, and avoid mentioning competitors. TAYA argues the opposite. By addressing these "taboo" topics head-on, you build radical trust. In the age of AI, where transparency is rewarded and obfuscation is penalized by algorithms seeking the most helpful content, TAYA provides a roadmap for digital authority.
This strategy isn't just about inbound marketing; it is a practical application of Google's E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) guidelines. By answering the questions your audience is actually asking, you signal to both humans and AI models that you are a reliable expert in your field.

The Five Pillars of AI-Era Content

The TAYA framework is built upon five core content categories. These represent the specific areas where buyers are most likely to seek clarity before making a purchase. In an AI environment, these categories are the primary data points that LLMs use to summarize your brand's value proposition.

1. Pricing and Cost: Why Transparency is Mandatory

One of the biggest friction points in the buyer's journey is the lack of pricing information. Most businesses avoid publishing prices because "it depends" on various factors. While that might be true, silence is interpreted as a lack of transparency by the consumer and a lack of data by the AI. If you don't provide cost information, an AI will summarize typical costs using data from your competitors or generic industry blogs. By failing to publish your own numbers, you lose control of the narrative. To apply TAYA here, you should publish price ranges, explain the variables that drive costs up or down, and provide example packages (e.g., Good, Better, Best). A classic success story in this category is Yale Appliance. By being brutally honest about the costs and reliability of different appliance brands, they transformed their website into a powerhouse of inbound leads. They didn't just sell fridges; they sold the information required to buy one confidently.

2. Problems: Turning Weaknesses into Strengths

Every product or service has drawbacks. Instead of hiding them, TAYA encourages you to own them. This category focuses on the limitations, risks, and scenarios where your solution might not be the right fit.
AI systems are designed to provide balanced guidance; a page that only lists benefits looks like a sales pitch, whereas a page that acknowledges trade-offs looks like expert advice. When you address the "problems" associated with your industry or your specific product, you demonstrate the "Experience" and "Trust" components of E-E-A-T. For example, if you are a small agency, you might write about the limitations of working with a boutique firm versus a global conglomerate. This reframes a perceived weakness as a source of honesty, which builds massive credibility with both users and AI evaluators.

3. Versus and Comparisons: Reducing Cognitive Load

Before a customer makes a final decision, they almost always compare two or more options. These "VS" queries are goldmines for AI visibility. LLMs love structured data, and comparison articles lend themselves perfectly to the tables and summaries that AI search features often highlight. To win here, you must compare products based on actual use cases, not just a checklist of features. Use a consistent framework: price, ease of setup, expected outcomes, and risk factors. By providing the clearest comparison on the web, you ensure that your brand is the primary source cited when an AI tool answers the question, "What is the difference between Product A and Product B?"

4. Reviews and Case Studies

This goes beyond simply asking for a five-star rating on Google. It involves creating long-form review content that helps buyers evaluate their options. AI tools frequently crawl review-style pages because they are inherently evaluative and structured. Your advantage over a generic review site is your first-hand experience and contextual truth. Review the tools you use, the services you provide, and even the industry standards you follow. Be honest about the pros and cons.
When you sound like a source of objective truth rather than a promotional advertisement, you increase the likelihood of being cited as an authority in AI-generated summaries.

5. Best in Class: The Courage to Recommend Others

Perhaps the boldest part of Marcus Sheridan's philosophy is the recommendation to highlight the "best" in your industry, even if that list includes your competitors. The goal is to become a trusted educator. If a user searches for the "best SEO agencies in London," and you provide a curated, honest list of the top firms (including yourself and others), you become the authority that facilitated their research.


How to build AI confidence inside your SEO team

The SEO industry is no stranger to upheaval. For those of us who have spent more than two decades in this field, we have seen the landscape transform countless times. We remember the early days of keyword stuffing to trick AltaVista, the seismic shift when Google introduced its first major algorithms, the transition to mobile-first indexing, and the Core Web Vitals era. Each of these milestones was initially met with a mixture of skepticism and anxiety.

However, the current shift toward Artificial Intelligence feels fundamentally different. It isn’t just another technical update or a change in ranking factors; it is a shift in how work is actually performed. The speed of change is unprecedented, and the emotional weight it carries is significant. Across the industry, even seasoned professionals are feeling the pressure. The concern is no longer just “how do I rank?” but rather, “if AI can do this faster and cheaper, where do I fit in?”

This is not a technical dilemma—it is a human one. When a team feels that their expertise is being marginalized by a machine, morale drops, adoption of new technology stalls, and productivity suffers. Some team members may over-rely on AI, losing their critical thinking skills, while others may avoid it entirely out of fear or resentment. As a leader, your challenge is not just to deploy tools, but to build confidence, capability, and trust within your SEO team.

The Emotional Hurdle of AI Integration

Before implementing any new AI workflow, leadership must acknowledge the psychological impact of automation. When teams hear that “AI will increase efficiency by 50%,” they often translate that as “the company will eventually need 50% fewer people.” Addressing this head-on is the first step toward building confidence. Confidence in AI isn’t built through mandates or software demos. It is built through culture.
Technology adoption is largely a cultural phenomenon; as research from Harvard Business School suggests, tools do not drive change—trust does. In the context of SEO, this means creating an environment where AI is seen as a “power suit” for the practitioner, not a replacement for them. The goal is to move from a state of uncertainty to a state of intentional, disciplined use of AI.

4 Strategies for Building AI Confidence in SEO Teams

Building real confidence requires a shift in perspective. The most effective SEO teams are not necessarily those with the most expensive tool stacks; they are the teams that use AI with a specific purpose. They use it to automate the “drudge work”—data pulls, research summaries, and keyword clustering—so that the humans in the room can focus on high-level strategy, creative storytelling, and stakeholder alignment. Here are four actionable strategies to foster a culture of AI confidence.

1. Earn Trust by Involving the Team in AI Tool Selection and Workflow Design

One of the fastest ways to breed resentment is to impose a top-down solution without consulting the people who will use it every day. People trust what they help create. Moving from a top-down implementation model to one of shared ownership is essential for long-term success. When you involve your SEO specialists in the evaluation process, you empower them. They transition from being “targets of automation” to “architects of the future.” This early involvement also serves a practical purpose: your front-line workers often have the best insights into where a workflow is broken or where an AI tool might introduce new risks, such as data inaccuracies or brand-voice inconsistencies.

To implement this, leaders should:

Invite teams to test tools: Set up “sandboxes” where team members can experiment with different LLMs (Large Language Models) or SEO-specific AI platforms and share their honest feedback.
Run pilot programs: Before rolling out an AI content assistant to the entire department, run a small experiment with one or two people to identify friction points.
Be transparent about the “Why”: Clearly communicate why certain tools were adopted and, equally importantly, why others were rejected. This transparency builds credibility.

When teams feel like they have a seat at the table, they are much more likely to lean into the technology rather than push it away.

2. Meet People Where They Are, Not Where You Want Them to Be

AI capability is not uniform across any organization. On a single SEO team, you might have one person who is already building custom GPTs and another who is still skeptical that AI can write a coherent meta description. Pushing everyone to the same level of adoption at the same speed is a recipe for burnout. Strong leaders recognize that capability develops at different speeds. You must create a “psychological safety” zone where it is okay to say, “I don’t know how to use this yet.” Avoid shaming those who are slow to adopt and, conversely, avoid over-celebrating the “early adopters” in a way that makes others feel obsolete.

Strategies for inclusive growth include:

Normalizing uncertainty: Make “learning out loud” a part of your team meetings. Encouraging people to share their struggles with AI is just as important as sharing their successes.
Providing multiple learning paths: Some people learn best through structured courses, while others prefer hands-on tinkering. Offer resources that cater to both.
Removing the pressure of perfection: Encourage experimentation where the stakes are low. If an AI experiment fails, treat it as a data point, not a performance issue.

3. Celebrate Wins and Highlight Champions

Confidence is contagious. When a team member successfully uses an AI prompt to cut a four-hour keyword mapping task down to fifteen minutes, that win should be amplified.
These “micro-wins” prove that AI is a tool for liberation, not just a tool for output. In many successful agencies, internal focus groups have become a staple. These groups—composed of members from SEO, operations, and leadership—work together to find practical applications for AI. For example, a focus group might spend a month figuring out how to best integrate AI into project management or client reporting.

Key actions to highlight success:

Internal Demos: Dedicate time in weekly meetings


5 PPC Strategies That Actually Boost Conversions in 2026 via @sejournal, @CallRail

Introduction: The PPC Landscape in 2026

The digital advertising landscape has undergone a seismic shift over the last few years. As we move through 2026, the traditional methods of “set it and forget it” pay-per-click (PPC) management are officially obsolete. We are now operating in an era defined by sophisticated artificial intelligence, the total sunsetting of third-party cookies, and a consumer base that demands hyper-relevance and instant gratification.

For marketers and business owners, the challenge is no longer just about getting the highest click-through rate (CTR). It is about the quality of those clicks and the efficiency with which they convert into revenue. With rising costs per click (CPC) and increased competition across search engines and social platforms, your PPC strategy must be more than just visible—it must be surgical. To thrive in this environment, brands must leverage deep data integration, automated creative workflows, and a profound understanding of user intent.

In this guide, we explore five definitive PPC strategies that are driving actual, measurable conversions in 2026. These strategies move beyond the basics of keyword bidding, focusing instead on the holistic ecosystem of the modern buyer’s journey.

1. Harnessing Predictive AI and Intent-Based Audience Modeling

By 2026, the focus of PPC has shifted from matching keywords to predicting intent. In the past, advertisers spent hours meticulously refining negative keyword lists and testing phrase match variations. Today, the platforms’ internal algorithms—powered by advanced neural networks—have become so adept at understanding user behavior that “broad match” is often more effective than “exact match,” provided it is fed the right signals.

Moving from Keywords to Signals

The most successful PPC campaigns in 2026 rely on intent-based audience modeling. Instead of targeting someone searching for “best running shoes,” modern AI allows you to target a user who has recently visited fitness blogs, tracked a run on a smartwatch, and searched for local marathon dates. This holistic view of the user profile is what drives conversions.

Predictive Bidding for High-Value Conversions

Smart Bidding has evolved into Predictive Bidding. Algorithms now analyze thousands of signals in real-time—such as the time of day, device type, location, and even local weather—to determine the likelihood of a conversion. The strategy here is to move away from “Maximize Conversions” and toward “Maximize Conversion Value.” By assigning different values to different actions (e.g., a newsletter sign-up vs. a completed purchase), you allow the AI to prioritize budget for the users most likely to generate high lifetime value.

2. Leveraging First-Party Data and Call Intelligence Integration

With the death of third-party cookies and the tightening of privacy regulations like GDPR and CCPA, the reliance on platform-provided data is no longer enough. The most successful advertisers in 2026 are those who own their data. This is where first-party data and call intelligence tools, such as CallRail, become indispensable.

Closing the Offline-to-Online Gap

For many industries—such as healthcare, legal, home services, and B2B software—the conversion often happens offline via a phone call. If your PPC data only tracks form fills, you are missing half the picture. Integrating call tracking software into your PPC stack allows you to attribute a specific phone call back to the exact ad, keyword, and campaign that triggered it. In 2026, this integration is seamless. When a prospect calls your business, AI-driven conversation intelligence analyzes the call in real-time, identifying keywords and sentiment to determine if the lead was “qualified.” This data is then fed back into Google Ads or Meta Ads as a conversion signal, teaching the algorithm exactly what a “good” lead looks like. This feedback loop is the secret weapon for boosting ROI in high-touch industries.

Building Privacy-Safe Customer Lists

First-party data is the fuel for modern PPC. By uploading hashed customer lists into your ad platforms, you can create “Enhanced Conversions” and “Predictive Lookalikes.” This allows the ad platforms to find new users who mirror the behaviors of your highest-paying customers, all while staying compliant with modern privacy standards.

3. Scaling Hyper-Personalized Creative with Generative AI

In 2026, the “creative” is the new targeting. As ad platforms automate more of the technical backend, the primary lever left for human marketers is the quality and relevance of the ad copy and imagery. However, manual creative production cannot keep up with the demand for personalization.

The Rise of Dynamic Creative Optimization (DCO)

Generative AI has revolutionized how we approach ad assets. Modern PPC strategies utilize Dynamic Creative Optimization to serve thousands of variations of an ad to different segments of the audience. For example, a travel brand can automatically generate different background images and headlines based on the user’s current location or past travel history.

The Human-AI Collaboration

While AI generates the variations, the strategy remains human. The key to boosting conversions in 2026 is ensuring that your brand voice remains consistent. Marketers are now “Creative Directors” of the AI, setting the guardrails for tone, style, and brand ethics. Ads that feel personal and authentic—rather than generic and computer-generated—are the ones that cut through the noise and drive action.
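The hashed customer lists mentioned earlier are typically prepared by normalizing each identifier and then hashing it with SHA-256 before upload; this is, for instance, the format Google Ads Customer Match expects for email addresses (lowercased, whitespace-trimmed, then hashed). A minimal sketch of that preparation step, using made-up sample data:

```python
import hashlib

def normalize_email(email: str) -> str:
    """Lowercase and strip surrounding whitespace, per common customer-match normalization rules."""
    return email.strip().lower()

def hash_identifier(email: str) -> str:
    """Return the SHA-256 hex digest of the normalized identifier."""
    return hashlib.sha256(normalize_email(email).encode("utf-8")).hexdigest()

# Two raw CRM rows that are really the same customer (illustrative data).
customers = ["  Jane.Doe@example.com ", "jane.doe@example.com"]
hashed = [hash_identifier(c) for c in customers]

# Normalization makes the digests match, so the ad platform can deduplicate
# and match the record without ever receiving the plaintext email.
assert hashed[0] == hashed[1]
```

Only the digests leave your systems; the plaintext identifiers stay in your CRM, which is what makes the list "privacy-safe."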
Video Content at Scale

Video is no longer optional for PPC. With the dominance of YouTube Shorts, TikTok, and Instagram Reels, short-form video ads have become the highest-converting asset type. Using AI tools to turn static product images into high-energy video ads allows brands to maintain a presence across all placements without the traditional costs of video production.

4. Omnichannel Synergy: Search, Social, and Retail Media

The customer journey in 2026 is messy and fragmented. A user might discover a product on TikTok, search for reviews on Google, and finally purchase it through an Amazon ad. PPC strategies that operate in silos are destined to fail. To boost conversions, you must implement a cross-channel strategy that treats the web as a single ecosystem.

Breaking Down the Silos

Omnichannel synergy means that your search ads and social ads are talking to each other. For instance, if a user clicks a Search ad but doesn’t convert, they should immediately be entered into a retargeting sequence on social media that


Google launches more visible links in AI Overviews and AI Mode

The landscape of digital search is undergoing its most significant transformation since the invention of the crawler. Google, the undisputed leader in the search engine market, has officially taken another step toward bridging the gap between artificial intelligence and the open web. The tech giant recently announced and rolled out a significant update to its AI-driven search features: the introduction of more visible, interactive links within AI Overviews and the dedicated AI Mode.

This update is not merely a cosmetic change; it represents a fundamental shift in how Google balances the utility of generative AI with its responsibility to the broader ecosystem of publishers, creators, and businesses. For months, the SEO community and digital publishers have expressed concerns that AI-generated summaries would lead to a “zero-click” reality, where users get all the information they need from the search results page without ever visiting the source website. These new link cards and enhanced icons appear to be Google’s direct answer to those concerns.

Understanding the New AI Link Experience

The update focuses on two primary areas: AI Overviews (formerly known as the Search Generative Experience, or SGE) and the newer, more immersive AI Mode. The primary goal is to make citations more prominent and to reduce the friction required for a user to move from an AI-generated summary to a deep-dive article on a publisher’s site.

Desktop Innovation: Hoverable Contextual Link Cards

On desktop devices, Google has introduced a “hover” state for links cited within AI responses. When a user navigates their cursor over a specific citation or link within the AI Overview, a pop-up window or “link card” automatically appears. This card isn’t just a simple URL; it is a rich preview of the destination page. These contextual overlays typically include the website’s name, a prominent favicon or brand icon, and more descriptive details about the content.
By providing this preview, Google allows users to verify the credibility of the source at a glance. It also acts as a “mini-landing page” that can entice the user to click through if the preview suggests the source contains the specific nuance or detail the AI summary might have missed.

Mobile and Cross-Platform Enhancements: Prominent Icons

While the hover functionality is specific to the desktop experience (where cursors allow for such interactions), the visual update extends to mobile as well. Google has overhauled the way link icons are displayed within AI responses across all devices. These icons are now larger, more colorful, and more descriptive. Instead of being tucked away at the bottom or hidden behind a dropdown menu, the links are integrated directly into the flow of the AI’s answer. This prominent placement ensures that even on smaller screens, the user is constantly aware that the information they are reading is sourced from the live web. It transforms the AI response from a monolithic block of text into a collaborative directory of resources.

The Official Word from Google

The rollout was confirmed by Robby Stein, a high-ranking executive at Google, who shared the news via a post on X (formerly Twitter). Stein highlighted that the update was designed specifically to facilitate deeper exploration of the web. He noted that in the new AI Mode, groups of links will automatically appear in pop-ups on desktop, allowing users to “jump right into a website to learn more.” Perhaps most importantly for SEOs and publishers, Stein revealed that Google’s internal testing showed this new user interface (UI) is significantly more engaging than the previous iteration. According to Google, these changes make it easier for users to access “great content across the web,” implying that the click-through rates (CTR) for these links may be higher than what was seen in earlier beta versions of AI Overviews.
The Evolution of AI Overviews and AI Mode

To understand why this update is so critical, we must look at the trajectory of Google’s AI integration. AI Overviews began as an experimental feature in Search Labs. Initially, citations were often criticized for being “hidden” behind expandable carousels or placed at the very end of long AI-generated responses. Publishers feared that their content was being used to train the model and answer queries, while the traffic that traditionally sustained them was being diverted.

AI Mode, on the other hand, represents Google’s move toward a chat-based search interface, similar to competitors like Perplexity or OpenAI’s SearchGPT. In this mode, the conversation is more fluid. By integrating highly visible links into this conversational flow, Google is attempting to maintain its identity as a “portal” to the web rather than just an “answer engine.”

Why Increased Link Visibility Matters for SEO

For search engine optimization professionals, this update is a double-edged sword that leans toward optimism. The increased visibility of links is a clear signal that Google is listening to the feedback of the publishing industry. Here is why this shift is significant for the SEO landscape:

1. Boosting Click-Through Rates (CTR)

In the early days of AI search, many feared that the CTR for organic results would plummet. While AI Overviews do take up significant “above the fold” real estate, the introduction of rich link cards means that being cited as a source is now more valuable than ever. A well-designed favicon and a clear, descriptive site name can now act as a brand advertisement right within the AI result.

2. The Importance of Brand Authority

Since the new link cards pull prominent details about a website, brand authority becomes even more vital. If a user hovers over a link and sees a recognized, trusted brand name in the pop-up card, they are much more likely to click. This reinforces the importance of Google’s E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) guidelines.

3. Real-Time Citation Value

Google’s AI Mode and Overviews are increasingly used for “discovery” queries—topics where the user is looking for recommendations, reviews, or explanations. By making links more visible, Google is encouraging a “discovery” behavior. Users might start with the AI to get the gist of a topic and


The Classifier Layer: Spam, Safety, Intent, Trust Stand Between You And The Answer via @sejournal, @DuaneForrester

The Fundamental Shift in Modern Search Architecture

For decades, the world of Search Engine Optimization (SEO) operated under a relatively straightforward paradigm: crawling, indexation, and ranking. If a website was technically sound and possessed enough backlink authority, it could generally expect to climb the search engine results pages (SERPs). However, the rise of Generative AI and Large Language Models (LLMs) has introduced a sophisticated new gatekeeper that sits between a website’s content and the end user. This gatekeeper is known as the Classifier Layer.

As industry experts like Duane Forrester have noted, visibility in the era of AI-driven answers is no longer just about being “better” than a competitor. It is about passing a rigorous series of automated tests designed to ensure that only the most helpful, safe, and relevant information reaches the user. Before an AI even considers ranking your content, it must first decide if your content is allowed to exist within its response framework. This article explores the four pillars of the Classifier Layer—Spam, Safety, Intent, and Trust—and how they dictate the future of digital visibility.

Understanding the Classifier Layer

In traditional search, a query triggers a retrieval from an index. In AI-powered search (such as Google’s AI Overviews, Perplexity, or ChatGPT), the process is far more complex. The system uses “classifiers”—machine learning models specifically trained to categorize and filter data—to evaluate information in real-time. These classifiers act as a sieve. If your content fails at the classifier level, it is essentially invisible. It doesn’t matter if your keyword density is perfect or if your site loads in under a second. If the classifier flags your page as untrustworthy or irrelevant to the safety guardrails of the AI, it will never be synthesized into an AI-generated answer.
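The sieve described above can be pictured as a series of sequential gates, where failing any single classifier removes a page from consideration. The sketch below is purely conceptual: the gate logic, signals, and thresholds are illustrative assumptions for this article's four-pillar model, not the internals of any real search system.

```python
from dataclasses import dataclass

@dataclass
class Page:
    text: str
    author_credentials: bool   # hypothetical trust signal
    reputable_citations: int   # hypothetical trust signal

def spam_gate(page: Page) -> bool:
    # Crude stand-in for thin-content detection: require minimum substance.
    return len(page.text.split()) >= 50

def safety_gate(page: Page) -> bool:
    # Stand-in for policy classifiers (medical, financial, dangerous content).
    banned_phrases = {"miracle cure"}
    return not any(p in page.text.lower() for p in banned_phrases)

def intent_gate(page: Page, query_terms: set) -> bool:
    # Does the page directly address the query, rather than burying it in filler?
    return query_terms <= set(page.text.lower().split())

def trust_gate(page: Page) -> bool:
    # E-E-A-T-style checks: a known author, cited by reputable sources.
    return page.author_credentials and page.reputable_citations > 0

def eligible_for_answer(page: Page, query_terms: set) -> bool:
    # A failure at any single gate makes the page invisible to the answer layer.
    return (spam_gate(page)
            and safety_gate(page)
            and intent_gate(page, query_terms)
            and trust_gate(page))
```

Note the short-circuit structure: a page with perfect on-page optimization still returns False if, say, the trust gate fails, which is the article's central point about classifier-level invisibility.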
To survive this shift, marketers and creators must understand the specific criteria these classifiers are looking for.

The First Gate: Spam and the Battle for Quality

Spam detection has evolved significantly from the days of simple keyword stuffing and hidden text. Modern spam classifiers are powered by neural networks that can identify “thin” content, programmatic junk, and low-effort AI-generated text that offers no unique value. The goal of the spam classifier is to protect the integrity of the AI’s knowledge base. When an AI engine processes a query, it looks for high-signal information. Spam classifiers are designed to weed out high-noise content. This includes content that exists solely to capture search traffic without providing a genuine solution to a problem. If a website publishes thousands of pages of “filler” content designed to rank for long-tail keywords, the spam classifier will likely flag the entire domain, preventing any of its pages from being used as a source for an AI answer.

To pass this gate, content must demonstrate human-centric utility. This means moving away from generic summaries and toward original reporting, unique insights, and comprehensive data that cannot be easily replicated by a basic prompt.

The Second Gate: Safety and Policy Guardrails

Safety is perhaps the most rigid of the four classifiers. Tech companies providing AI answers are under immense pressure to prevent their models from generating harmful, illegal, or biased content. Consequently, safety classifiers are exceptionally sensitive. They are programmed to block any content that might lead to a “hallucination” that could cause real-world harm. The safety classifier looks for content that violates specific policy guidelines, including:

Medical Misinformation: Advice that contradicts established scientific consensus.
Financial Harm: High-risk financial advice without proper licensing or context.
Dangerous Activities: Content that encourages or explains how to perform illegal or harmful acts.
Hate Speech and Harassment: Any language that could be interpreted as discriminatory or aggressive.

For businesses in the “Your Money or Your Life” (YMYL) sectors, the safety gate is the most difficult to clear. If your content deals with health, wealth, or safety, it undergoes a much higher level of scrutiny. The classifier layer will prioritize sources that are recognized as “safe” and authoritative, often discarding newer or more controversial voices to minimize the risk of the AI providing a dangerous answer.

The Third Gate: Decoding User Intent

In the past, intent was often categorized into simple buckets: informational, navigational, or transactional. While those categories still matter, the AI intent classifier is much more nuanced. It uses semantic understanding to determine whether a piece of content actually solves the specific problem the user is facing at that exact moment. The intent classifier asks: “Does this content provide the most direct and useful path to the user’s goal?” If a user asks “how to fix a leaky faucet,” the intent classifier will prioritize content that provides a step-by-step guide, a list of tools, and potential pitfalls. It will deprioritize a 2,000-word blog post that spends the first 800 words discussing the history of indoor plumbing.

The rise of AI search means that fluff is a liability. The classifier layer is designed to extract the “meat” of the content. If the core intent is buried under layers of SEO-driven filler, the classifier may fail to recognize the value of the page, leading to a loss in visibility. Optimization now requires a laser-like focus on answering the user’s query as efficiently as possible.

The Fourth Gate: The Trust Layer and Authority

Trust is the final, and perhaps most significant, barrier.
In a world where AI can generate text that looks professional but may be factually incorrect, “Trust” has become the primary currency of the internet. The trust classifier evaluates the reputation of the source, the credentials of the author, and the historical accuracy of the domain. This is where E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) moves from being a guideline to a technical requirement.

The trust classifier checks for:

Source Verification: Does this site have a history of being cited by other reputable organizations?
Authorial Expertise: Who wrote this? Are they a recognized expert in their field?
Factual Consistency: Does the information provided align with known facts across the web, or is it an outlier?
Transparency: Is the site clear about its ownership,


Airbnb says traffic from AI chatbots converts better than Google

The Shifting Landscape of Digital Discovery

The digital marketing world was recently shaken by a revelation from one of the industry’s most influential leaders. During Airbnb’s Q4 2025 earnings call on February 12, CEO Brian Chesky shared a data point that confirms what many tech analysts have suspected: the era of search engine dominance is facing a significant challenge from generative AI. According to Chesky, traffic arriving at Airbnb via AI chatbots is converting at a higher rate than traffic originating from Google.

This statement marks a pivotal moment in the evolution of the internet. For over two decades, Google has been the undisputed gatekeeper of the web, serving as the primary funnel for discovery and commerce. However, the rise of conversational interfaces—powered by Large Language Models (LLMs)—is beginning to rewire how consumers find what they are looking for. While Chesky did not provide specific conversion percentages or exact traffic volumes, the qualitative trend is clear: users who interact with AI before landing on a booking page are more likely to complete a transaction.

Why AI Chatbot Traffic Outperforms Traditional Search

To understand why a visitor from ChatGPT or Claude might convert better than one from a standard Google search, we have to look at the “intent” behind the click. Traditional search engines often require the user to do the heavy lifting. A traveler might type “best beach houses in Mexico” into Google and then spend an hour sifting through ten different tabs, comparing prices, amenities, and locations.

In contrast, AI chatbots act as a discovery layer that handles the synthesis of information before the user ever clicks a link. By the time a user asks an AI to “find a quiet villa in Tulum with a private pool and high-speed Wi-Fi for under $300 a night” and receives a specific recommendation, the discovery phase is largely complete. The click-through to Airbnb is no longer an act of exploration; it is an act of execution. The user isn’t browsing; they are arriving ready to book.

The Qualified Lead Advantage

This phenomenon aligns with predictions made by tech giants like Microsoft and Google itself. Both companies have suggested that while AI search might lead to a lower volume of total clicks compared to traditional search, the clicks that do occur will be of significantly higher quality. For a business like Airbnb, this is an ideal scenario. High-volume, low-intent traffic often leads to high bounce rates and increased server costs without a corresponding increase in revenue. High-intent traffic from AI assistants allows for a more efficient sales funnel.

The Key Players: ChatGPT, Gemini, and Claude

During the earnings call, Chesky referenced a variety of AI platforms, including OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. He framed these not as competitors that might “disintermediate” or hide Airbnb from the user, but rather as powerful acquisition partners. The diversity of the AI landscape is a benefit to platforms like Airbnb, as it prevents a single entity from monopolizing the discovery phase.

Chesky positioned these chatbots as “top-of-funnel discovery engines.” He noted that they are fundamentally similar to search in their objective—connecting a user with information—but superior in their ability to understand nuance and context. As these models become more sophisticated, they will likely become the primary starting point for complex planning tasks, such as organizing a multi-city vacation or finding niche accommodations that match specific lifestyle needs.

Airbnb’s Internal AI Evolution: From Search to Knowing the User

While external AI chatbots are driving high-converting traffic to the site, Airbnb is also aggressively integrating AI into its own architecture. Chesky’s vision for the future of the platform is “AI-native.” This means the app will eventually move beyond a simple search bar and become a personalized concierge that “knows you.”

Conversational Search Within the App

Airbnb is currently testing an internal, AI-powered conversational search feature. Rather than a wide-scale rollout, the company is following a philosophy of rapid iteration. Currently, this AI search is live for a very small percentage of traffic, allowing the engineering team to gather data and refine the experience in real-time. The goal is to make the search process feel like a conversation with a travel expert rather than a database query.

The Operational Power of AI Agents

The impact of AI at Airbnb isn’t limited to the front-end user experience; it is also transforming the company’s operations. Chesky revealed that Airbnb’s in-house AI customer service agent is already resolving nearly one-third (30%) of North American support tickets without any human intervention. Currently, this tool is English-only, but the company has ambitious plans to roll out multilingual support and voice-based AI assistance globally. Chesky set a high bar for the coming year, stating that the goal is for AI to handle “significantly more than 30%” of tickets. By automating routine inquiries—such as booking modifications, cancellation policy clarifications, or basic troubleshooting—Airbnb can free up its human support staff to handle more complex and sensitive issues, ultimately improving the overall guest and host experience.

The Strategic Shift Away from Performance Marketing

Airbnb’s embrace of AI discovery is consistent with its broader marketing strategy over the last few years. Long before the public release of ChatGPT, Airbnb began shifting its budget away from traditional performance marketing—specifically Google search ads—and toward brand marketing. The company bet that building a strong, recognizable brand would be more sustainable than constantly paying for the top spot on a Google results page.

This move appears prescient in the context of the AI revolution. If discovery is moving away from the “ten blue links” of Google and toward personalized AI recommendations, brand equity becomes more important than ever. If an AI is asked for a “vacation rental,” you want the AI to think of “Airbnb” as the synonymous term for that category.

The Future of Advertising in an AI World

One of the biggest questions facing the tech industry is how monetization will work in a world dominated by AI chatbots. On the earnings call, Chesky addressed the prospect of
