
How to apply ‘They Ask, You Answer’ to SEO and AI visibility

The Shift from Keywords to Conversations

Search behavior has undergone a fundamental transformation. We are no longer in an era where users simply type fragmented keywords into a search bar and hope for the best. Today, search is conversational, inquisitive, and increasingly delegated to artificial intelligence. Whether through ChatGPT, Google Gemini, or Claude, users are outsourcing their complex decision-making processes to Large Language Models (LLMs).

As Google evolves from a traditional search engine that lists links into a sophisticated “answer engine,” businesses must adapt. The challenge is no longer just about ranking for a specific term; it is about becoming the definitive source that an AI pulls from when a user asks a question. If the machine cannot find clear, honest, and direct information about your brand, it will simply find it elsewhere, likely from a competitor or a third-party aggregator. To survive and thrive in this AI-first landscape, businesses need a content framework that prioritizes the user’s needs above all else. This is where the “They Ask, You Answer” (TAYA) philosophy becomes an essential tool for modern SEO and AI visibility.

What is ‘They Ask, You Answer’?

“They Ask, You Answer” is a business philosophy and content marketing framework popularized by Marcus Sheridan. The premise is deceptively simple: your customers have questions, and your job is to answer them honestly, thoroughly, and publicly. This includes the difficult questions that sales teams often try to dodge during the early stages of a lead cycle.

In traditional marketing, companies often hide their pricing, ignore their limitations, and avoid mentioning competitors. TAYA argues the opposite: by addressing these “taboo” topics head-on, you build radical trust. In the age of AI, where transparency is rewarded and obfuscation is penalized by algorithms seeking the most helpful content, TAYA provides a roadmap for digital authority.
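One practical way to make question-and-answer content machine-readable is schema.org FAQPage markup. The Python sketch below shows the general shape of that markup; the questions and answers are invented purely for illustration, and you should check your platform’s current structured-data guidelines before relying on any specific property.

```python
import json

# Hypothetical Q&A pairs. In practice these come from the questions
# your customers actually ask (pricing, problems, comparisons, etc.).
faqs = [
    ("How much does a kitchen remodel cost?",
     "Most projects fall between $15,000 and $40,000 depending on scope."),
    ("What are the downsides of working with a small agency?",
     "Smaller teams have less capacity, though communication is often more direct."),
]

# Build schema.org FAQPage markup so crawlers and LLMs can parse the answers.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

print(json.dumps(faq_page, indent=2))
```

The resulting JSON-LD would normally be embedded in the page inside a `<script type="application/ld+json">` tag, which is how most crawlers expect to find it.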
This strategy isn’t just about inbound marketing; it is a practical application of Google’s E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) guidelines. By answering the questions your audience is actually asking, you signal to both humans and AI models that you are a reliable expert in your field.

The Five Pillars of AI-Era Content

The TAYA framework is built upon five core content categories. These represent the specific areas where buyers are most likely to seek clarity before making a purchase. In an AI environment, these categories are the primary data points that LLMs use to summarize your brand’s value proposition.

1. Pricing and Cost: Why Transparency is Mandatory

One of the biggest friction points in the buyer’s journey is the lack of pricing information. Most businesses avoid publishing prices because “it depends” on various factors. While that may be true, silence is interpreted as a lack of transparency by the consumer and a lack of data by the AI. If you don’t provide cost information, an AI will summarize typical costs using data from your competitors or generic industry blogs. By failing to publish your own numbers, you lose control of the narrative.

To apply TAYA here, publish price ranges, explain the variables that drive costs up or down, and provide example packages (e.g., Good, Better, Best). A classic success story in this category is Yale Appliance. By being brutally honest about the costs and reliability of different appliance brands, they transformed their website into a powerhouse of inbound leads. They didn’t just sell fridges; they sold the information required to buy one confidently.

2. Problems: Turning Weaknesses into Strengths

Every product or service has drawbacks. Instead of hiding them, TAYA encourages you to own them. This category focuses on the limitations, risks, and scenarios where your solution might not be the right fit. AI systems are designed to provide balanced guidance; a page that only lists benefits looks like a sales pitch, whereas a page that acknowledges trade-offs looks like expert advice. When you address the “problems” associated with your industry or your specific product, you demonstrate the “Experience” and “Trust” components of E-E-A-T. For example, if you are a small agency, you might write about the limitations of working with a boutique firm versus a global conglomerate. This reframes a perceived weakness as a source of honesty, which builds enormous credibility with both users and AI evaluators.

3. Versus and Comparisons: Reducing Cognitive Load

Before a customer makes a final decision, they almost always compare two or more options. These “vs.” queries are goldmines for AI visibility. LLMs love structured data, and comparison articles lend themselves perfectly to the tables and summaries that AI search features often highlight. To win here, you must compare products based on actual use cases, not just a checklist of features. Use a consistent framework: price, ease of setup, expected outcomes, and risk factors. By providing the clearest comparison on the web, you ensure that your brand is the primary source cited when an AI tool answers the question, “What is the difference between Product A and Product B?”

4. Reviews and Case Studies

This goes beyond simply asking for a five-star rating on Google. It involves creating long-form review content that helps buyers evaluate their options. AI tools frequently crawl review-style pages because they are inherently evaluative and structured. Your advantage over a generic review site is your first-hand experience and contextual truth. Review the tools you use, the services you provide, and even the industry standards you follow. Be honest about the pros and cons. When you sound like a source of objective truth rather than a promotional advertisement, you increase the likelihood of being cited as an authority in AI-generated summaries.

5. Best in Class: The Courage to Recommend Others

Perhaps the boldest part of Marcus Sheridan’s philosophy is the recommendation to highlight the “best” in your industry, even if that list includes your competitors. The goal is to become a trusted educator. If a user searches for the “best SEO agencies in London” and you provide a curated, honest list of the top firms (including yourself and others), you become the authority that facilitated their research. The “Answer


How to build AI confidence inside your SEO team

The SEO industry is no stranger to upheaval. Those of us who have spent more than two decades in this field have seen the landscape transform countless times. We remember the early days of keyword stuffing to trick AltaVista, the seismic shift when Google introduced its first major algorithms, the transition to mobile-first indexing, and the Core Web Vitals era. Each of these milestones was initially met with a mixture of skepticism and anxiety.

However, the current shift toward artificial intelligence feels fundamentally different. It isn’t just another technical update or a change in ranking factors; it is a shift in how work is actually performed. The speed of change is unprecedented, and the emotional weight it carries is significant. Across the industry, even seasoned professionals are feeling the pressure. The concern is no longer just “how do I rank?” but rather, “if AI can do this faster and cheaper, where do I fit in?”

This is not a technical dilemma; it is a human one. When a team feels that their expertise is being marginalized by a machine, morale drops, adoption of new technology stalls, and productivity suffers. Some team members may over-rely on AI, losing their critical thinking skills, while others may avoid it entirely out of fear or resentment. As a leader, your challenge is not just to deploy tools, but to build confidence, capability, and trust within your SEO team.

The Emotional Hurdle of AI Integration

Before implementing any new AI workflow, leadership must acknowledge the psychological impact of automation. When teams hear that “AI will increase efficiency by 50%,” they often translate that as “the company will eventually need 50% fewer people.” Addressing this head-on is the first step toward building confidence. Confidence in AI isn’t built through mandates or software demos. It is built through culture. Technology adoption is largely a cultural phenomenon; as research from Harvard Business School suggests, tools do not drive change, trust does. In the context of SEO, this means creating an environment where AI is seen as a “power suit” for the practitioner, not a replacement for them. The goal is to move from a state of uncertainty to a state of intentional, disciplined use of AI.

4 Strategies for Building AI Confidence in SEO Teams

Building real confidence requires a shift in perspective. The most effective SEO teams are not necessarily those with the most expensive tool stacks; they are the teams that use AI with a specific purpose. They use it to automate the “drudge work” (data pulls, research summaries, and keyword clustering) so that the humans in the room can focus on high-level strategy, creative storytelling, and stakeholder alignment. Here are four actionable strategies to foster a culture of AI confidence.

1. Earn Trust by Involving the Team in AI Tool Selection and Workflow Design

One of the fastest ways to breed resentment is to impose a top-down solution without consulting the people who will use it every day. People trust what they help create. Moving from a top-down implementation model to one of shared ownership is essential for long-term success. When you involve your SEO specialists in the evaluation process, you empower them. They transition from being “targets of automation” to “architects of the future.” This early involvement also serves a practical purpose: your front-line workers often have the best insights into where a workflow is broken or where an AI tool might introduce new risks, such as data inaccuracies or brand-voice inconsistencies.

To implement this, leaders should:

- Invite teams to test tools: Set up “sandboxes” where team members can experiment with different Large Language Models (LLMs) or SEO-specific AI platforms and share their honest feedback.
- Run pilot programs: Before rolling out an AI content assistant to the entire department, run a small experiment with one or two people to identify friction points.
- Be transparent about the “why”: Clearly communicate why certain tools were adopted and, equally important, why others were rejected. This transparency builds credibility.

When teams feel they have a seat at the table, they are much more likely to lean into the technology rather than push it away.

2. Meet People Where They Are, Not Where You Want Them to Be

AI capability is not uniform across any organization. On a single SEO team, you might have one person who is already building custom GPTs and another who is still skeptical that AI can write a coherent meta description. Pushing everyone to the same level of adoption at the same speed is a recipe for burnout. Strong leaders recognize that capability develops at different speeds. You must create a zone of psychological safety where it is okay to say, “I don’t know how to use this yet.” Avoid shaming those who are slow to adopt and, conversely, avoid over-celebrating the “early adopters” in a way that makes others feel obsolete.

Strategies for inclusive growth include:

- Normalizing uncertainty: Make “learning out loud” a part of your team meetings. Encouraging people to share their struggles with AI is just as important as sharing their successes.
- Providing multiple learning paths: Some people learn best through structured courses, while others prefer hands-on tinkering. Offer resources that cater to both.
- Removing the pressure of perfection: Encourage experimentation where the stakes are low. If an AI experiment fails, treat it as a data point, not a performance issue.

3. Celebrate Wins and Highlight Champions

Confidence is contagious. When a team member successfully uses an AI prompt to cut a four-hour keyword mapping task down to fifteen minutes, that win should be amplified. These “micro-wins” prove that AI is a tool for liberation, not just a tool for output. In many successful agencies, internal focus groups have become a staple. These groups, composed of members from SEO, operations, and leadership, work together to find practical applications for AI. For example, a focus group might spend a month figuring out how best to integrate AI into project management or client reporting.

Key actions to highlight success:

- Internal demos: Dedicate time in weekly meetings


5 PPC Strategies That Actually Boost Conversions in 2026 via @sejournal, @CallRail

Introduction: The PPC Landscape in 2026

The digital advertising landscape has undergone a seismic shift over the last few years. As we move through 2026, the traditional “set it and forget it” approach to pay-per-click (PPC) management is officially obsolete. We are now operating in an era defined by sophisticated artificial intelligence, the total sunsetting of third-party cookies, and a consumer base that demands hyper-relevance and instant gratification.

For marketers and business owners, the challenge is no longer just about getting the highest click-through rate (CTR). It is about the quality of those clicks and the efficiency with which they convert into revenue. With rising costs per click (CPC) and increased competition across search engines and social platforms, your PPC strategy must be more than just visible; it must be surgical. To thrive in this environment, brands must leverage deep data integration, automated creative workflows, and a profound understanding of user intent.

In this guide, we explore five definitive PPC strategies that are driving actual, measurable conversions in 2026. These strategies move beyond the basics of keyword bidding, focusing instead on the holistic ecosystem of the modern buyer’s journey.

1. Harnessing Predictive AI and Intent-Based Audience Modeling

By 2026, the focus of PPC has shifted from matching keywords to predicting intent. In the past, advertisers spent hours meticulously refining negative keyword lists and testing phrase match variations. Today, the platforms’ internal algorithms, powered by advanced neural networks, have become so adept at understanding user behavior that broad match is often more effective than exact match, provided it is fed the right signals.

Moving from Keywords to Signals

The most successful PPC campaigns in 2026 rely on intent-based audience modeling. Instead of targeting someone searching for “best running shoes,” modern AI allows you to target a user who has recently visited fitness blogs, tracked a run on a smartwatch, and searched for local marathon dates. This holistic view of the user profile is what drives conversions.

Predictive Bidding for High-Value Conversions

Smart Bidding has evolved into predictive bidding. Algorithms now analyze thousands of signals in real time, such as time of day, device type, location, and even local weather, to determine the likelihood of a conversion. The strategy here is to move away from “Maximize Conversions” and toward “Maximize Conversion Value.” By assigning different values to different actions (e.g., a newsletter sign-up vs. a completed purchase), you allow the AI to prioritize budget for the users most likely to generate high lifetime value.

2. Leveraging First-Party Data and Call Intelligence Integration

With the death of third-party cookies and the tightening of privacy regulations like GDPR and CCPA, relying on platform-provided data is no longer enough. The most successful advertisers in 2026 are those who own their data. This is where first-party data and call intelligence tools, such as CallRail, become indispensable.

Closing the Offline-to-Online Gap

For many industries, such as healthcare, legal, home services, and B2B software, the conversion often happens offline via a phone call. If your PPC data only tracks form fills, you are missing half the picture. Integrating call tracking software into your PPC stack allows you to attribute a specific phone call back to the exact ad, keyword, and campaign that triggered it. In 2026, this integration is seamless.
When a prospect calls your business, AI-driven conversation intelligence analyzes the call in real time, identifying keywords and sentiment to determine whether the lead was qualified. This data is then fed back into Google Ads or Meta Ads as a conversion signal, teaching the algorithm exactly what a “good” lead looks like. This feedback loop is the secret weapon for boosting ROI in high-touch industries.

Building Privacy-Safe Customer Lists

First-party data is the fuel for modern PPC. By uploading hashed customer lists into your ad platforms, you can create Enhanced Conversions and predictive lookalikes. This allows the ad platforms to find new users who mirror the behaviors of your highest-paying customers, all while staying compliant with modern privacy standards.

3. Scaling Hyper-Personalized Creative with Generative AI

In 2026, the creative is the new targeting. As ad platforms automate more of the technical back end, the primary lever left for human marketers is the quality and relevance of the ad copy and imagery. However, manual creative production cannot keep up with the demand for personalization.

The Rise of Dynamic Creative Optimization (DCO)

Generative AI has revolutionized how we approach ad assets. Modern PPC strategies use Dynamic Creative Optimization to serve thousands of variations of an ad to different segments of the audience. For example, a travel brand can automatically generate different background images and headlines based on the user’s current location or past travel history.

The Human-AI Collaboration

While AI generates the variations, the strategy remains human. The key to boosting conversions in 2026 is ensuring that your brand voice remains consistent. Marketers are now “creative directors” of the AI, setting the guardrails for tone, style, and brand ethics. Ads that feel personal and authentic, rather than generic and computer-generated, are the ones that cut through the noise and drive action.
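The “hashing” behind privacy-safe customer lists is typically a SHA-256 digest of a normalized identifier. Below is a minimal Python sketch; the normalization shown (trim whitespace, lowercase) reflects the commonly documented requirements for email-based customer-list uploads, but verify your ad platform’s current specification before using it in production.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Normalize an email address the way ad platforms generally expect
    (trim whitespace, lowercase), then return its SHA-256 hex digest."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical customer list; raw addresses never leave your systems.
customers = ["  Jane.Doe@example.com ", "buyer@example.com"]
hashed_list = [normalize_and_hash(e) for e in customers]

for digest in hashed_list:
    print(digest)
```

Because the same input always produces the same digest, the platform can match your hashed records against its own hashed user identifiers without either side exchanging raw personal data.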
Video Content at Scale

Video is no longer optional for PPC. With the dominance of YouTube Shorts, TikTok, and Instagram Reels, short-form video ads have become the highest-converting asset type. Using AI tools to turn static product images into high-energy video ads allows brands to maintain a presence across all placements without the traditional costs of video production.

4. Omnichannel Synergy: Search, Social, and Retail Media

The customer journey in 2026 is messy and fragmented. A user might discover a product on TikTok, search for reviews on Google, and finally purchase it through an Amazon ad. PPC strategies that operate in silos are destined to fail. To boost conversions, you must implement a cross-channel strategy that treats the web as a single ecosystem.

Breaking Down the Silos

Omnichannel synergy means that your search ads and social ads are talking to each other. For instance, if a user clicks a Search ad but doesn’t convert, they should immediately be entered into a retargeting sequence on social media that


Google launches more visible links in AI Overviews and AI Mode

The landscape of digital search is undergoing its most significant transformation since the invention of the crawler. Google, the undisputed leader in the search engine market, has officially taken another step toward bridging the gap between artificial intelligence and the open web. The company recently announced and rolled out a significant update to its AI-driven search features: more visible, interactive links within AI Overviews and the dedicated AI Mode.

This update is not merely a cosmetic change; it represents a fundamental shift in how Google balances the utility of generative AI with its responsibility to the broader ecosystem of publishers, creators, and businesses. For months, the SEO community and digital publishers have expressed concerns that AI-generated summaries would lead to a “zero-click” reality, where users get all the information they need from the search results page without ever visiting the source website. These new link cards and enhanced icons appear to be Google’s direct answer to those concerns.

Understanding the New AI Link Experience

The update focuses on two primary areas: AI Overviews (formerly known as the Search Generative Experience, or SGE) and the newer, more immersive AI Mode. The primary goal is to make citations more prominent and to reduce the friction required for a user to move from an AI-generated summary to a deep-dive article on a publisher’s site.

Desktop Innovation: Hoverable Contextual Link Cards

On desktop devices, Google has introduced a hover state for links cited within AI responses. When a user moves their cursor over a specific citation or link within an AI Overview, a pop-up “link card” automatically appears. This card isn’t just a simple URL; it is a rich preview of the destination page. These contextual overlays typically include the website’s name, a prominent favicon or brand icon, and more descriptive details about the content. By providing this preview, Google allows users to verify the credibility of the source at a glance. The card also acts as a “mini landing page” that can entice the user to click through if the preview suggests the source contains a specific nuance or detail the AI summary may have missed.

Mobile and Cross-Platform Enhancements: Prominent Icons

While the hover functionality is specific to the desktop experience (where cursors allow for such interactions), the visual update extends to mobile as well. Google has overhauled the way link icons are displayed within AI responses across all devices. These icons are now larger, more colorful, and more descriptive. Instead of being tucked away at the bottom or hidden behind a dropdown menu, the links are integrated directly into the flow of the AI’s answer. This prominent placement ensures that even on smaller screens, the user is constantly aware that the information they are reading is sourced from the live web. It transforms the AI response from a monolithic block of text into a collaborative directory of resources.

The Official Word from Google

The rollout was confirmed by Robby Stein, a high-ranking executive at Google, who shared the news in a post on X (formerly Twitter). Stein highlighted that the update was designed specifically to facilitate deeper exploration of the web. He noted that in the new AI Mode, groups of links will automatically appear in pop-ups on desktop, allowing users to “jump right into a website to learn more.” Perhaps most importantly for SEOs and publishers, Stein revealed that Google’s internal testing showed the new user interface (UI) is significantly more engaging than the previous iteration. According to Google, these changes make it easier for users to access “great content across the web,” implying that click-through rates (CTR) for these links may be higher than what was seen in earlier beta versions of AI Overviews.

The Evolution of AI Overviews and AI Mode

To understand why this update is so critical, we must look at the trajectory of Google’s AI integration. AI Overviews began as an experimental feature in Search Labs. Initially, citations were often criticized for being hidden behind expandable carousels or placed at the very end of long AI-generated responses. Publishers feared that their content was being used to train the model and answer queries, while the traffic that traditionally sustained them was being diverted.

AI Mode, on the other hand, represents Google’s move toward a chat-based search interface, similar to competitors like Perplexity or OpenAI’s SearchGPT. In this mode, the conversation is more fluid. By integrating highly visible links into this conversational flow, Google is attempting to maintain its identity as a portal to the web rather than just an “answer engine.”

Why Increased Link Visibility Matters for SEO

For search engine optimization professionals, this update is a double-edged sword that leans toward optimism. The increased visibility of links is a clear signal that Google is listening to feedback from the publishing industry. Here is why this shift is significant for the SEO landscape:

1. Boosting Click-Through Rates (CTR)

In the early days of AI search, many feared that the CTR for organic results would plummet. While AI Overviews do take up significant above-the-fold real estate, the introduction of rich link cards means that being cited as a source is now more valuable than ever. A well-designed favicon and a clear, descriptive site name can now act as a brand advertisement right within the AI result.

2. The Importance of Brand Authority

Since the new link cards pull prominent details about a website, brand authority becomes even more vital. If a user hovers over a link and sees a recognized, trusted brand name in the pop-up card, they are much more likely to click. This reinforces the importance of Google’s E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) guidelines.

3. Real-Time Citation Value

Google’s AI Mode and Overviews are increasingly used for “discovery” queries: topics where the user is looking for recommendations, reviews, or explanations. By making links more visible, Google is encouraging discovery behavior. Users might start with the AI to get the gist of a topic and


The Classifier Layer: Spam, Safety, Intent, Trust Stand Between You And The Answer via @sejournal, @DuaneForrester

The Fundamental Shift in Modern Search Architecture

For decades, the world of Search Engine Optimization (SEO) operated under a relatively straightforward paradigm: crawling, indexation, and ranking. If a website was technically sound and possessed enough backlink authority, it could generally expect to climb the search engine results pages (SERPs). However, the rise of generative AI and Large Language Models (LLMs) has introduced a sophisticated new gatekeeper that sits between a website’s content and the end user. This gatekeeper is known as the classifier layer.

As industry experts like Duane Forrester have noted, visibility in the era of AI-driven answers is no longer just about being “better” than a competitor. It is about passing a rigorous series of automated tests designed to ensure that only the most helpful, safe, and relevant information reaches the user. Before an AI even considers ranking your content, it must first decide whether your content is allowed to exist within its response framework. This article explores the four pillars of the classifier layer (Spam, Safety, Intent, and Trust) and how they dictate the future of digital visibility.

Understanding the Classifier Layer

In traditional search, a query triggers a retrieval from an index. In AI-powered search (such as Google’s AI Overviews, Perplexity, or ChatGPT), the process is far more complex. The system uses classifiers, machine learning models specifically trained to categorize and filter data, to evaluate information in real time. These classifiers act as a sieve. If your content fails at the classifier level, it is essentially invisible. It doesn’t matter if your keyword density is perfect or if your site loads in under a second. If the classifier flags your page as untrustworthy or irrelevant to the safety guardrails of the AI, it will never be synthesized into an AI-generated answer.
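To make the sieve metaphor concrete, the classifier layer can be pictured as a chain of boolean gates, where failing any single gate removes a page from consideration. The thresholds, features, and gate logic in this Python sketch are invented purely for illustration; real systems use trained machine learning models, not hand-written rules.

```python
# Toy illustration of the "classifier layer" as a chain of gates.
# All thresholds and feature names here are hypothetical.

def spam_gate(doc: dict) -> bool:
    # Reject thin or keyword-stuffed pages.
    return doc["word_count"] >= 300 and doc["keyword_density"] < 0.05

def safety_gate(doc: dict) -> bool:
    # Reject anything carrying a policy violation flag.
    return not doc["policy_flags"]

def intent_gate(doc: dict, query_topic: str) -> bool:
    # Keep only pages that directly address the query's topic.
    return doc["topic"] == query_topic

def trust_gate(doc: dict) -> bool:
    # Require a minimum source-reputation score.
    return doc["source_score"] >= 0.7

def passes_classifier_layer(doc: dict, query_topic: str) -> bool:
    """A page must clear every gate; failing any one makes it invisible
    to the answer-synthesis step."""
    return (spam_gate(doc)
            and safety_gate(doc)
            and intent_gate(doc, query_topic)
            and trust_gate(doc))

candidate = {"word_count": 1200, "keyword_density": 0.02,
             "policy_flags": [], "topic": "faucet-repair",
             "source_score": 0.9}
print(passes_classifier_layer(candidate, "faucet-repair"))  # True
```

The key property the sketch captures is that the gates are conjunctive: strong trust signals cannot rescue a page that fails the spam or safety check.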
To survive this shift, marketers and creators must understand the specific criteria these classifiers are looking for.

The First Gate: Spam and the Battle for Quality

Spam detection has evolved significantly from the days of simple keyword stuffing and hidden text. Modern spam classifiers are powered by neural networks that can identify “thin” content, programmatic junk, and low-effort AI-generated text that offers no unique value. The goal of the spam classifier is to protect the integrity of the AI’s knowledge base. When an AI engine processes a query, it looks for high-signal information; spam classifiers are designed to weed out high-noise content. This includes content that exists solely to capture search traffic without providing a genuine solution to a problem. If a website publishes thousands of pages of “filler” content designed to rank for long-tail keywords, the spam classifier will likely flag the entire domain, preventing any of its pages from being used as a source for an AI answer.

To pass this gate, content must demonstrate human-centric utility. This means moving away from generic summaries and toward original reporting, unique insights, and comprehensive data that cannot be easily replicated by a basic prompt.

The Second Gate: Safety and Policy Guardrails

Safety is perhaps the most rigid of the four classifiers. Tech companies providing AI answers are under immense pressure to prevent their models from generating harmful, illegal, or biased content. Consequently, safety classifiers are exceptionally sensitive. They are programmed to block any content that might lead to a “hallucination” that could cause real-world harm. The safety classifier looks for content that violates specific policy guidelines, including:

- Medical misinformation: Advice that contradicts established scientific consensus.
- Financial harm: High-risk financial advice without proper licensing or context.
- Dangerous activities: Content that encourages or explains how to perform illegal or harmful acts.
- Hate speech and harassment: Any language that could be interpreted as discriminatory or aggressive.

For businesses in “Your Money or Your Life” (YMYL) sectors, the safety gate is the most difficult to clear. If your content deals with health, wealth, or safety, it undergoes a much higher level of scrutiny. The classifier layer will prioritize sources that are recognized as safe and authoritative, often discarding newer or more controversial voices to minimize the risk of the AI providing a dangerous answer.

The Third Gate: Decoding User Intent

In the past, intent was often categorized into simple buckets: informational, navigational, or transactional. While those categories still matter, the AI intent classifier is much more nuanced. It uses semantic understanding to determine whether a piece of content actually solves the specific problem the user is facing at that exact moment. The intent classifier asks: “Does this content provide the most direct and useful path to the user’s goal?” If a user asks “how to fix a leaky faucet,” the intent classifier will prioritize content that provides a step-by-step guide, a list of tools, and potential pitfalls. It will deprioritize a 2,000-word blog post that spends its first 800 words discussing the history of indoor plumbing.

The rise of AI search means that fluff is a liability. The classifier layer is designed to extract the “meat” of the content. If the core intent is buried under layers of SEO-driven filler, the classifier may fail to recognize the value of the page, leading to a loss in visibility. Optimization now requires a laser-like focus on answering the user’s query as efficiently as possible.

The Fourth Gate: The Trust Layer and Authority

Trust is the final, and perhaps most significant, barrier. In a world where AI can generate text that looks professional but may be factually incorrect, trust has become the primary currency of the internet. The trust classifier evaluates the reputation of the source, the credentials of the author, and the historical accuracy of the domain. This is where E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) moves from being a guideline to a technical requirement. The trust classifier checks for:

- Source verification: Does this site have a history of being cited by other reputable organizations?
- Authorial expertise: Who wrote this? Are they a recognized expert in their field?
- Factual consistency: Does the information provided align with known facts across the web, or is it an outlier?
- Transparency: Is the site clear about its ownership,


Airbnb says traffic from AI chatbots converts better than Google

The Shifting Landscape of Digital Discovery

The digital marketing world was recently shaken by a revelation from one of the industry’s most influential leaders. During Airbnb’s Q4 2025 earnings call on February 12, CEO Brian Chesky shared a data point that confirms what many tech analysts have suspected: the era of search engine dominance is facing a significant challenge from generative AI. According to Chesky, traffic arriving at Airbnb via AI chatbots is converting at a higher rate than traffic originating from Google.

This statement marks a pivotal moment in the evolution of the internet. For over two decades, Google has been the undisputed gatekeeper of the web, serving as the primary funnel for discovery and commerce. However, the rise of conversational interfaces—powered by Large Language Models (LLMs)—is beginning to rewire how consumers find what they are looking for. While Chesky did not provide specific conversion percentages or exact traffic volumes, the qualitative trend is clear: users who interact with AI before landing on a booking page are more likely to complete a transaction.

Why AI Chatbot Traffic Outperforms Traditional Search

To understand why a visitor from ChatGPT or Claude might convert better than one from a standard Google search, we have to look at the “intent” behind the click. Traditional search engines often require the user to do the heavy lifting. A traveler might type “best beach houses in Mexico” into Google and then spend an hour sifting through ten different tabs, comparing prices, amenities, and locations. In contrast, AI chatbots act as a discovery layer that handles the synthesis of information before the user ever clicks a link. By the time a user asks an AI to “find a quiet villa in Tulum with a private pool and high-speed Wi-Fi for under $300 a night” and receives a specific recommendation, the discovery phase is largely complete. The click-through to Airbnb is no longer an act of exploration; it is an act of execution.
The user isn’t browsing; they are arriving ready to book.

The Qualified Lead Advantage

This phenomenon aligns with predictions made by tech giants like Microsoft and Google itself. Both companies have suggested that while AI search might lead to a lower volume of total clicks compared to traditional search, the clicks that do occur will be of significantly higher quality. For a business like Airbnb, this is an ideal scenario. High-volume, low-intent traffic often leads to high bounce rates and increased server costs without a corresponding increase in revenue. High-intent traffic from AI assistants allows for a more efficient sales funnel.

The Key Players: ChatGPT, Gemini, and Claude

During the earnings call, Chesky referenced a variety of AI platforms, including OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. He framed these not as competitors that might “disintermediate” or hide Airbnb from the user, but rather as powerful acquisition partners. The diversity of the AI landscape is a benefit to platforms like Airbnb, as it prevents a single entity from monopolizing the discovery phase. Chesky positioned these chatbots as “top-of-funnel discovery engines.” He noted that they are fundamentally similar to search in their objective—connecting a user with information—but superior in their ability to understand nuance and context. As these models become more sophisticated, they will likely become the primary starting point for complex planning tasks, such as organizing a multi-city vacation or finding niche accommodations that match specific lifestyle needs.

Airbnb’s Internal AI Evolution: From Search to Knowing the User

While external AI chatbots are driving high-converting traffic to the site, Airbnb is also aggressively integrating AI into its own architecture.
Chesky’s vision for the future of the platform is “AI-native.” This means the app will eventually move beyond a simple search bar and become a personalized concierge that “knows you.”

Conversational Search Within the App

Airbnb is currently testing an internal, AI-powered conversational search feature. Rather than a wide-scale rollout, the company is following a philosophy of rapid iteration. Currently, this AI search is live for a very small percentage of traffic, allowing the engineering team to gather data and refine the experience in real time. The goal is to make the search process feel like a conversation with a travel expert rather than a database query.

The Operational Power of AI Agents

The impact of AI at Airbnb isn’t limited to the front-end user experience; it is also transforming the company’s operations. Chesky revealed that Airbnb’s in-house AI customer service agent is already resolving nearly one-third (30%) of North American support tickets without any human intervention. Currently, this tool is English-only, but the company has ambitious plans to roll out multilingual support and voice-based AI assistance globally. Chesky set a high bar for the coming year, stating that the goal is for AI to handle “significantly more than 30%” of tickets. By automating routine inquiries—such as booking modifications, cancellation policy clarifications, or basic troubleshooting—Airbnb can free up its human support staff to handle more complex and sensitive issues, ultimately improving the overall guest and host experience.

The Strategic Shift Away from Performance Marketing

Airbnb’s embrace of AI discovery is consistent with its broader marketing strategy over the last few years. Long before the public release of ChatGPT, Airbnb began shifting its budget away from traditional performance marketing—specifically Google search ads—and toward brand marketing.
The company bet that building a strong, recognizable brand would be more sustainable than constantly paying for the top spot on a Google results page. This move appears prescient in the context of the AI revolution. If discovery is moving away from the “ten blue links” of Google and toward personalized AI recommendations, brand equity becomes more important than ever. If an AI is asked for a “vacation rental,” you want the AI to think of “Airbnb” as the synonymous term for that category.

The Future of Advertising in an AI World

One of the biggest questions facing the tech industry is how monetization will work in a world dominated by AI chatbots. On the earnings call, Chesky addressed the prospect of


TikTok launches AI-powered ad options for entertainment marketers

The Evolution of Entertainment Discovery on TikTok

The digital landscape for entertainment marketing is undergoing a seismic shift. For years, traditional media buys—billboards, television spots, and standard pre-roll ads—were the primary drivers of box office sales and streaming subscriptions. However, the rise of short-form video has fundamentally altered how audiences discover, discuss, and decide what to watch. TikTok has emerged as the epicenter of this transformation, evolving from a simple video-sharing app into a massive engine for cultural influence. Recognizing its own power in the entertainment sector, TikTok has officially launched a suite of AI-powered advertising options specifically designed for entertainment marketers in the European market.

This strategic rollout is not merely a technical update; it is a response to the way modern consumers interact with media. TikTok users do not just consume content; they participate in it. When a new series drops or a film hits theaters, the conversation happens in real time through memes, reaction videos, and theory breakdowns. By integrating advanced artificial intelligence into its ad stack, TikTok is providing marketers with the tools to insert themselves into these organic conversations with surgical precision. These new tools are designed to bridge the gap between “scrolling” and “watching,” turning passive viewers into active subscribers and ticket buyers.

Advanced AI-Driven Ad Formats: A Closer Look

The core of this launch centers on two distinct ad types: Streaming Ads and New Title Launch. Both formats leverage TikTok’s proprietary AI algorithms to ensure that the creative assets are served to the users most likely to engage with them. By moving away from broad demographic targeting and toward behavior-based, intent-driven modeling, these ads represent the next generation of digital performance marketing.
Streaming Ads: Personalization at Scale

For streaming platforms, the challenge has always been discovery. In a world of “infinite scroll” and “content fatigue,” getting a user to commit to a new series is a hurdle. TikTok’s new Streaming Ads are built to solve this by using AI to show personalized content based on a user’s specific engagement history. These are not static banners; they are dynamic, data-driven units that adapt to the viewer.

Marketers can choose from two primary formats within the Streaming Ads category. The first is a four-title video carousel. This allows a streaming service to showcase a variety of its library in a single ad unit, letting the AI determine which titles are featured based on what the user has previously interacted with. If a user frequently engages with true crime creators, the AI can prioritize the platform’s latest documentary series in the carousel. The second format is a multi-title media card, which offers a more cinematic, expansive view of a platform’s offerings, ideal for brand awareness and deep-linking into specific app categories.

New Title Launch: Driving High-Intent Conversions

While Streaming Ads focus on the breadth of a library, the New Title Launch format is built for the “big event.” Whether it is a blockbuster film premiere, a highly anticipated season finale, or a live ticketed event, this format is designed to capture high-intent users. The AI analyzes signals such as genre preference, past engagement with similar franchises, and even price sensitivity to identify users who are on the verge of making a purchase or a long-term commitment. This format is particularly effective for entertainment brands looking to convert cultural hype into measurable results. By targeting users who have already shown interest in a specific genre or actor, the New Title Launch ad minimizes wasted spend and maximizes the conversion rate for ticket sales or new subscriptions.
It turns the platform’s viral energy into a structured funnel for entertainment ROI.

The Data Behind the Strategy: Why Entertainment Marketers Are Moving to TikTok

The decision to launch these tools in Europe is backed by staggering internal data that highlights TikTok’s dominance in the entertainment space. According to TikTok’s own research, 80% of its users state that the platform directly influences their streaming and movie-going choices. This isn’t just a “social” platform anymore; it is a recommendation engine that rivals the algorithms of the streaming services themselves.

The sheer volume of entertainment-related content on the platform is unprecedented. In 2025, an average of 6.5 million daily posts were shared about film and television on TikTok. This massive data set provides the AI with a wealth of information to learn from. Every like, share, and “watch time” metric on a fan-made video serves as a signal that the AI uses to refine its ad targeting. Furthermore, the correlation between TikTok trends and commercial success is undeniable. Last year, 15 of the top 20 European box office films were viral hits on TikTok before or during their theatrical runs. This indicates that a movie’s success is increasingly tied to its ability to gain traction within the TikTok ecosystem.

Strategic Timing: The Berlinale International Film Festival

The rollout of these AI-powered ad options coincides with the 76th Berlinale International Film Festival, one of the most prestigious events in the global film calendar. By launching during Berlinale, TikTok is sending a clear message to the industry: it is no longer just a place for “user-generated content,” but a sophisticated partner for the highest levels of the film and television industry. Europe represents a diverse and complex market for entertainment marketers, with varying languages, cultural preferences, and viewing habits.
The AI-driven nature of these new ads is particularly useful in this context, as it allows for localized targeting without the need for massive manual campaign management. The AI can identify which creative assets resonate in Germany versus France or Spain, optimizing the campaign in real time to suit the specific nuances of each regional audience.

How AI Enhances the Creative Process for Marketers

One of the most significant benefits of AI-powered advertising is its ability to reduce the friction between creative production and distribution. In the past, marketers had to guess which trailer or clip would perform best with a specific audience. TikTok’s AI eliminates much of this guesswork through automated testing and optimization. When a


Meta adds Manus AI tools into Ads Manager

The Evolution of Meta Ads Manager: Introducing Manus AI Integration

The landscape of digital advertising is undergoing its most significant transformation since the invention of the tracking pixel. Meta Platforms, the parent company of Facebook and Instagram, has officially begun integrating Manus AI tools directly into its Ads Manager ecosystem. This move marks a pivot from experimental generative AI—like creating image variations or writing ad copy—to “agentic” AI, which is designed to handle complex workflows, perform research, and generate deep-dive reports autonomously.

For years, advertisers have navigated a dashboard that, while powerful, often required significant manual labor to extract meaningful insights. The introduction of Manus AI into the Ads Manager workflow is intended to bridge the gap between raw data and actionable strategy. By embedding these tools into the everyday interface of performance marketers, Meta is signaling a future where the platform acts less like a static tool and more like an intelligent partner.

What is Manus AI and Why Did Meta Integrate It?

Manus AI represents a new frontier in artificial intelligence: the AI agent. Unlike standard large language models (LLMs) that focus on generating text based on prompts, agentic AI is designed to execute multi-step tasks. In the context of Meta Ads, this means the AI doesn’t just answer questions about your data; it can proactively organize that data, cross-reference it with market trends, and produce a comprehensive analysis without the user having to click through dozens of tabs.

Meta’s acquisition and subsequent integration of Manus AI technology are strategic responses to the massive capital expenditures the company has funneled into AI research and development. Mark Zuckerberg has been transparent about the company’s “AI-first” pivot, but investors have remained focused on one core question: How will this spend translate into revenue?
By placing Manus AI into the hands of advertisers—the primary source of Meta’s income—the company is creating a direct link between its AI innovations and advertising performance.

Key Features: Automation for Research and Reporting

The rollout of Manus AI tools within Ads Manager focuses on three primary pillars: research, reporting, and workflow automation. While the rollout is currently hitting select accounts through in-stream prompts and the “Tools” menu, the capabilities are already defining a new standard for ad management.

Streamlined Report Building

One of the most time-consuming aspects of being a digital marketer is reporting. Traditionally, this involves exporting CSV files, creating pivot tables, and manually identifying which creative assets or audience segments are driving the best return on ad spend (ROAS). Manus AI aims to automate this entire pipeline. Advertisers can now use the AI agent to build custom reports that highlight specific KPIs or compare campaign performance across different timeframes with minimal manual input. The agent understands the context of the data, allowing it to highlight anomalies or successes that a human eye might miss during a quick scan.

Advanced Audience Research

Understanding who is interacting with your ads is just as important as the ads themselves. Manus AI tools are built to perform deep audience research within the Ads Manager environment. By analyzing historical data and current market signals, the AI can suggest new audience segments that align with an advertiser’s goals. This goes beyond the “Advantage+” automated targeting Meta already offers; it provides the “why” behind the targeting, giving marketers the insights they need to refine their creative strategy.

In-Workflow Assistance

The integration is designed to be non-intrusive yet highly accessible. Many users are now seeing pop-up alerts and prompts that encourage them to activate Manus AI while they are in the middle of setting up a campaign.
This “in-workflow” adoption strategy ensures that the AI is used at the point of greatest need—when a marketer is actually making decisions about budget, targeting, or creative direction.

The Strategic Shift: From Generative AI to Agentic AI

To understand why the addition of Manus AI is so significant, one must look at the broader context of Meta’s AI-driven advertising system. For the past year, Meta has focused heavily on tools like “Andromeda” and “GEM” (Generative AI for Marketing). These tools were largely focused on the “front end” of advertising—generating images, expanding backgrounds, and testing different headline variations. Manus AI represents the “back end” evolution. It is less about the visual appearance of the ad and more about the intelligence that powers the campaign. This shift toward agentic AI is a recognition that the bottleneck for many advertisers is no longer creating the ad itself, but managing the complexity of the data and the logistics of the campaign. By automating the research and analysis phases, Meta is lowering the barrier to entry for small businesses while providing enterprise-level tools to large agencies.

Why This Matters for Performance Marketers

The digital advertising industry is currently caught between increasing privacy restrictions (such as Apple’s ATT and the sunsetting of third-party cookies) and the need for higher precision in targeting. In this environment, the only way to maintain performance is through better data utilization. Manus AI provides that bridge.

Efficiency Gains and Time Savings

For agency owners and in-house marketing teams, time is the most valuable resource. The ability to delegate “grunt work”—like data cleaning and basic report generation—to an AI agent allows human talent to focus on high-level strategy and creative innovation. If Manus AI can reduce the time spent on reporting by even 20%, it equates to hundreds of hours saved across an organization over the course of a year.
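For a concrete sense of the “grunt work” being automated, here is a minimal pure-Python sketch of the kind of per-campaign ROAS roll-up a reporting agent might produce. The campaign names and figures are invented for illustration, and the logic is a generic aggregation, not Meta's actual tooling.

```python
# Illustrative sketch: the manual ROAS roll-up an AI reporting agent
# could automate. All campaign rows below are invented example data.
from collections import defaultdict

rows = [
    {"campaign": "Spring Sale", "spend": 500.0, "revenue": 2250.0},
    {"campaign": "Spring Sale", "spend": 300.0, "revenue": 900.0},
    {"campaign": "Retargeting", "spend": 200.0, "revenue": 1400.0},
]

def roas_report(rows):
    """Aggregate spend and revenue per campaign, then compute ROAS."""
    totals = defaultdict(lambda: {"spend": 0.0, "revenue": 0.0})
    for r in rows:
        totals[r["campaign"]]["spend"] += r["spend"]
        totals[r["campaign"]]["revenue"] += r["revenue"]
    # ROAS = total revenue / total ad spend
    return {name: round(t["revenue"] / t["spend"], 2) for name, t in totals.items()}

report = roas_report(rows)
# report -> {"Spring Sale": 3.94, "Retargeting": 7.0}
```

Trivial at this scale, but across thousands of ad sets and daily refreshes, automating exactly this kind of aggregation (plus anomaly flagging on top of it) is where the claimed time savings come from.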
Faster Optimization Cycles

In the world of paid social, speed is a competitive advantage. The faster you can identify that a creative trend is dying or that a specific demographic is over-indexing on cost-per-click (CPC), the faster you can pivot your budget. Manus AI’s real-time reporting capabilities mean that these insights are delivered as they happen, rather than at the end of a weekly or monthly reporting cycle. This enables a more agile approach to budget management.

Evidence-Based Decision Making

Subjectivity is the enemy of performance marketing. Advertisers often fall into the trap of following a


Google shifts Lookalike to AI signals in Demand Gen

The Evolution of Audience Targeting in Demand Gen

Google is fundamentally restructuring how advertisers reach new customers within its Demand Gen campaigns. In a significant move toward an AI-first ecosystem, Google has announced that Lookalike segments will transition from strict targeting constraints to optimization signals. Scheduled to take full effect by March 2026, this shift represents a departure from the traditional “walled garden” approach to audience building, favoring a more fluid, machine-learning-driven model.

Demand Gen campaigns, which replaced Discovery Ads, are designed to capture interest across Google’s most visual and immersive surfaces, including YouTube (Shorts, In-stream, and Feed), Google Discover, and Gmail. Central to these campaigns has been the “Lookalike” segment—a tool that allows advertisers to upload a seed list of existing customers and ask Google to find similar users. Under the new update, the role of that seed list is changing from a hard boundary into a directional compass.

The Technical Shift: From Constraints to Signals

To understand the weight of this update, it is essential to distinguish between a “constraint” and a “signal.” In the legacy version of Lookalike targeting, advertisers selected a similarity tier: Narrow (top 2.5% of similarity), Balanced (top 5%), or Broad (top 10%). The algorithm was strictly bound to these percentages. If a user fell outside that specific similarity pool, they would not see the ad, regardless of how likely they were to convert at that specific moment.

Starting in March 2026, these tiers will act as “optimization signals.” This means that while Google’s AI will prioritize the users within those defined similarity pools, it is no longer forbidden from venturing outside of them. If the system’s predictive modeling identifies a user who is highly likely to convert but technically falls outside the “Broad” 10% similarity tier, the system can now serve an ad to that user.
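The difference between a hard constraint and a weighted signal can be shown with a toy eligibility model. Everything here (the threshold, the weights, the scores) is a hypothetical illustration of the two behaviors, not Google's actual serving logic.

```python
# Toy contrast of "constraint" vs. "signal" targeting.
# similarity: closeness to the seed list (0-1); conv_prob: predicted
# conversion probability (0-1). All thresholds/weights are hypothetical.

def eligible_constraint(similarity, conv_prob, tier_cutoff=0.90):
    """Legacy model: a hard fence. Conversion likelihood is ignored;
    users outside the similarity tier never see the ad."""
    return similarity >= tier_cutoff

def serve_score_signal(similarity, conv_prob, sim_weight=0.4):
    """New model: similarity nudges the score but no longer gates it."""
    return sim_weight * similarity + (1 - sim_weight) * conv_prob

# A user just outside the tier, but very likely to convert:
user = {"similarity": 0.85, "conv_prob": 0.92}

blocked = eligible_constraint(**user)   # False: the fence excludes them
score = serve_score_signal(**user)      # 0.4*0.85 + 0.6*0.92 = 0.892
served = score >= 0.5                   # True: the signal model can serve
```

The same user is invisible under the constraint model and highly ranked under the signal model, which is exactly the behavioral change the update describes.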
This transition effectively reframes the Lookalike segment. It is no longer a fence that keeps the campaign within a specific yard; it is a signal that tells the AI where to start looking, while granting it the autonomy to follow the scent of a conversion wherever it leads.

Comparing the Before and After Models

The practical implications for digital marketers are vast. Let’s break down the structural differences between the two models to better understand the impact on day-to-day campaign management.

The Legacy Model (Pre-March 2026)

Under the old system, advertisers had a high degree of predictability regarding who would see their ads. By choosing a “Narrow” tier, a brand could ensure that their budget was spent only on the users most mathematically similar to their existing customer base. This was ideal for niche products or brands with very specific buyer personas. However, the downside was a “scale ceiling.” Once the system exhausted the high-intent users within that narrow pool, performance would often plateau, or cost per acquisition (CPA) would spike as the system struggled to find more conversions within a limited set of users.

The New Signal-Based Model

In the new model, the tiers still exist, but they function as a weighted priority. The AI uses the Lookalike list as a high-quality data source to understand the characteristics of a “good” customer. However, it combines this with real-time intent signals—such as recent search history, app usage, and video consumption—to find conversions that a strict similarity model might miss. This approach is designed to maximize conversion volume and lower the average CPA by allowing the algorithm to bypass the artificial boundaries of a percentage-based list.

The Synergy with Optimized Targeting

A critical component of this update is how it interacts with Google’s existing “Optimized Targeting” feature.
Optimized Targeting is a setting that allows Google to look beyond your selected audience segments to find conversions you may have missed. When Lookalike segments become signals, they will stack with Optimized Targeting to create a powerful, albeit less transparent, engine for growth. If an advertiser enables both, the Lookalike signal provides the “who,” while Optimized Targeting provides the “how and when” for expansion. This layering allows Google’s AI to pursue a broader reach while still keeping the campaign anchored in the brand’s first-party data. For performance marketers, this means the system has more freedom than ever to pursue the most efficient conversions across the entire Google network.

Why Google is Moving Toward AI Signals

The shift toward signal-based targeting is not an isolated event; it is part of a broader industry trend toward “Black Box” advertising. Several factors are driving Google to make this change, ranging from technical necessity to performance optimization.

1. Overcoming the Scale Cap

Strict Lookalike targeting often leads to diminishing returns. As campaigns mature, they frequently hit a wall where they cannot find new users within the narrow similarity pool. By converting these pools into signals, Google allows the campaign to scale more naturally. This is particularly important for Demand Gen campaigns, which are designed to sit at the top and middle of the marketing funnel, where high volume is a primary goal.

2. Navigating a Cookie-Less Future

The digital advertising landscape is moving away from granular tracking and third-party cookies. As traditional tracking becomes less reliable, Google is leaning into “modeled behavior.” AI signals allow the system to use aggregated, anonymized data to predict behavior rather than relying on individual tracking. This makes the platform more resilient to privacy changes and browser-level tracking preventions.

3. Reducing Model Complexity

Maintaining high-quality similarity models for every single advertiser is a massive computational task. By shifting to a more generalized AI suggestion model, Google can streamline its internal processing while potentially delivering better results for the advertiser through a more holistic view of user intent.

Strategic Implications: What Advertisers Need to Do

For brands and agencies, the move to signal-based Lookalikes requires a shift in strategy. The focus is moving away from “who we target” and toward “what data we feed the machine.”

Prioritize High-Quality First-Party Data

Because the Lookalike segment is now a signal, the quality of that signal is more important than ever. Advertisers should focus on


Google’s Jeff Dean: AI Search relies on classic ranking and retrieval

In the rapidly evolving landscape of artificial intelligence, there is a common misconception that the advent of Large Language Models (LLMs) has completely rewritten the rules of information retrieval. Many observers assume that Google’s transition toward AI-driven results, such as AI Overviews, represents a total abandonment of the “old” search algorithms that have governed the web for decades. However, according to Jeff Dean, Google’s Chief AI Scientist, the reality is far more grounded in tradition than many realize.

In a detailed interview on the Latent Space: The AI Engineer Podcast, Dean pulled back the curtain on the architecture powering Google’s modern AI search experiences. His insights reveal a critical truth for developers, SEO professionals, and tech enthusiasts: AI search is not a replacement for classic search infrastructure. Instead, it is a sophisticated layer that sits on top of a foundational system built on decades of ranking, retrieval, and indexing expertise.

The Architecture: Filter First, Reason Last

The core of Jeff Dean’s explanation centers on a concept that might surprise those who view AI as an all-knowing entity that “reads” the entire internet in real time. He clarified that Google’s AI systems do not process the whole web simultaneously for every query. Instead, they follow a rigorous, multi-stage pipeline designed for efficiency and accuracy. Dean describes this as a “staged pipeline” that prioritizes filtering before any generative reasoning occurs.

Visibility in an AI-generated search result still depends entirely on a document’s ability to clear traditional ranking thresholds. If a piece of content does not make it into the broad candidate pool of search results through standard SEO and ranking signals, it has zero chance of being used by an LLM to synthesize an answer. In essence, the AI doesn’t find the content; the search engine finds the content, and the AI merely explains it.
The Candidate Pool: From Trillions to Thousands

To understand how this works at scale, we must look at the numbers Dean provided. The internet consists of trillions of tokens—fragments of data that make up the web. When a user enters a query, it is computationally impossible and wildly inefficient for a high-reasoning LLM to scan those trillions of tokens to find an answer. Instead, Google uses “lightweight methods”—the classic retrieval systems—to narrow the field. This first pass identifies a subset of roughly 30,000 documents that are potentially relevant to the user’s intent. This initial culling is done in milliseconds using traditional signals. Dean explained that this process is about “down-ranking” the noise to find a manageable set of “interesting tokens.”

Reranking and Refining

Once the system has identified the top 30,000 candidates, it doesn’t stop there. Google applies increasingly sophisticated algorithms and signals to refine that list further. This is a tiered process where the cost of computation increases as the number of documents decreases. The system filters the 30,000 documents down to a few hundred, and eventually down to the final set—often around 10 to 100 documents—that are truly relevant to the specific task.

Dean refers to the user experience of AI search as an “illusion” of attending to the entire web. While it feels like the AI is searching the whole internet for you, it is actually only “paying attention” to the very small subset of data that the traditional ranking engine has already verified as high-quality and relevant. “You’re going to want to identify what are the 30,000-ish documents… and then how do you go from that into what are the 117 documents I really should be paying attention to?” Dean noted.
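The tiered funnel Dean describes (cheap filters over huge candidate sets first, expensive models over small sets last) can be sketched as a generic retrieve-and-rerank loop. The scoring functions and stage sizes below are illustrative stand-ins, not Google's actual signals or cutoffs.

```python
# Sketch of a staged retrieval funnel: each stage applies a costlier
# scorer to a smaller candidate set. All scorers are hypothetical stand-ins.
import random

random.seed(0)
# Toy corpus: in reality this would be an index over the whole web.
corpus = [{"doc_id": i, "quality": random.random()} for i in range(100_000)]

def cheap_score(doc):    # stand-in for fast index-level signals
    return doc["quality"]

def rerank_score(doc):   # stand-in for a heavier learned ranking model
    return doc["quality"]

def llm_score(doc):      # stand-in for slow, precise LLM-based judgment
    return doc["quality"]

def staged_pipeline(corpus):
    """Funnel: full corpus -> ~30,000 -> a few hundred -> ~100 documents."""
    stage1 = sorted(corpus, key=cheap_score, reverse=True)[:30_000]
    stage2 = sorted(stage1, key=rerank_score, reverse=True)[:300]
    return sorted(stage2, key=llm_score, reverse=True)[:100]

final_docs = staged_pipeline(corpus)
```

Only the final ~100 documents ever reach the expensive generative step, which is the “illusion” Dean describes: the model appears to attend to the whole web while actually reading a tiny, pre-vetted slice of it.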
Matching Intent: Moving from Keywords to Meaning

One of the most significant shifts in search over the last several years has been the move from lexical matching (finding exact words) to semantic matching (understanding the meaning behind words). While LLMs have accelerated this trend, Dean pointed out that this evolution is not entirely new; it is a continuation of a journey Google started long ago. In the early days of search, if a user typed “blue suede shoes,” the engine looked for pages that contained those exact three words. If a page used the phrase “azure leather footwear,” it might not show up, even though it was contextually identical. Today, thanks to LLM-based representations of text, Google can move beyond “hard” word overlap.

The Power of Topic Overlap

Dean explained that LLMs allow Google to evaluate whether a page—or even a specific paragraph within a page—is topically relevant to a query, even if the wording differs entirely. This shift places a premium on topical authority and comprehensive coverage. For content creators, this means that repeating a keyword five times is far less effective than explaining a concept so clearly that the system understands the subject matter’s intent. This “softening” of the definition of a query allows Google to bridge the gap between how people think and how they type. By using LLM representations, the search engine can map the “meaning” of a query to the “meaning” of a document, creating a much more fluid and intuitive discovery process.

The 2001 Milestone: Why Query Expansion Changed Everything

To provide context for today’s AI advancements, Jeff Dean took a trip down memory lane to 2001. This was a pivotal year for Google, marking the moment when the company moved its entire index from physical disks into RAM (memory) across a massive fleet of machines. Before 2001, adding extra terms to a user’s query was expensive.
Every time Google wanted to look for a synonym, it required a “disk seek,” which added latency and slowed down the search for the user. Consequently, the engine had to be very selective about the terms it searched for.

Query Expansion in the Pre-LLM Era

Once the index was in memory, the technical constraints vanished. Google could suddenly take a three-word query from a user and “expand” it into 50 terms behind the scenes. If a user searched for “cafe,” the system could simultaneously look for “restaurant,” “bistro,” “coffee shop,” and “diner” without any performance penalty. Dean emphasized that
