

TikTok launches AI-powered ad options for entertainment marketers

The Evolution of Entertainment Discovery on TikTok The digital landscape for entertainment marketing is undergoing a seismic shift. For years, traditional media buys—billboards, television spots, and standard pre-roll ads—were the primary drivers of box office sales and streaming subscriptions. However, the rise of short-form video has fundamentally altered how audiences discover, discuss, and decide what to watch. TikTok has emerged as the epicenter of this transformation, evolving from a simple video-sharing app into a massive engine for cultural influence. Recognizing its own power in the entertainment sector, TikTok has officially launched a suite of AI-powered advertising options specifically designed for entertainment marketers in the European market. This strategic rollout is not merely a technical update; it is a response to the way modern consumers interact with media. TikTok users do not just consume content; they participate in it. When a new series drops or a film hits theaters, the conversation happens in real-time through memes, reaction videos, and theory breakdowns. By integrating advanced artificial intelligence into its ad stack, TikTok is providing marketers with the tools to insert themselves into these organic conversations with surgical precision. These new tools are designed to bridge the gap between “scrolling” and “watching,” turning passive viewers into active subscribers and ticket buyers. Advanced AI-Driven Ad Formats: A Closer Look The core of this launch centers on two distinct ad types: Streaming Ads and New Title Launch. Both formats leverage TikTok’s proprietary AI algorithms to ensure that the creative assets are served to the users most likely to engage with them. By moving away from broad demographic targeting and toward behavior-based, intent-driven modeling, these ads represent the next generation of digital performance marketing. Streaming Ads: Personalization at Scale For streaming platforms, the challenge has always been discovery. In a world of “infinite scroll” and “content fatigue,” getting a user to commit to a new series is a hurdle. TikTok’s new Streaming Ads are built to solve this by using AI to show personalized content based on a user’s specific engagement history. These are not static banners; they are dynamic, data-driven units that adapt to the viewer. Marketers can choose from two primary formats within the Streaming Ads category. The first is a four-title video carousel. This allows a streaming service to showcase a variety of its library in a single ad unit, letting the AI determine which titles are featured based on what the user has previously interacted with. If a user frequently engages with true crime creators, the AI can prioritize the platform’s latest documentary series in the carousel. The second format is a multi-title media card, which offers a more cinematic, expansive view of a platform’s offerings, ideal for brand awareness and deep-linking into specific app categories. New Title Launch: Driving High-Intent Conversions While Streaming Ads focus on the breadth of a library, the New Title Launch format is built for the “big event.” Whether it is a blockbuster film premiere, a highly anticipated season finale, or a live ticketed event, this format is designed to capture high-intent users. The AI analyzes signals such as genre preference, past engagement with similar franchises, and even price sensitivity to identify users who are on the verge of making a purchase or a long-term commitment. 
This format is particularly effective for entertainment brands looking to convert cultural hype into measurable results. By targeting users who have already shown interest in a specific genre or actor, the New Title Launch ad minimizes wasted spend and maximizes the conversion rate for ticket sales or new subscriptions. It turns the platform’s viral energy into a structured funnel for entertainment ROI. The Data Behind the Strategy: Why Entertainment Marketers Are Moving to TikTok The decision to launch these tools in Europe is backed by staggering internal data that highlights TikTok’s dominance in the entertainment space. According to TikTok’s own research, 80% of its users state that the platform directly influences their streaming and movie-going choices. This isn’t just a “social” platform anymore; it is a recommendation engine that rivals the algorithms of the streaming services themselves. The sheer volume of entertainment-related content on the platform is unprecedented. In 2025, an average of 6.5 million daily posts were shared about film and television on TikTok. This massive data set provides the AI with a wealth of information to learn from. Every like, share, and “watch time” metric on a fan-made video serves as a signal that the AI uses to refine its ad targeting. Furthermore, the correlation between TikTok trends and commercial success is undeniable. Last year, 15 of the top 20 European box office films were viral hits on TikTok before or during their theatrical runs. This indicates that a movie’s success is increasingly tied to its ability to gain traction within the TikTok ecosystem. Strategic Timing: The Berlinale International Film Festival The rollout of these AI-powered ad options coincides with the 76th Berlinale International Film Festival, one of the most prestigious events in the global film calendar. By launching during Berlinale, TikTok is sending a clear message to the industry: it is no longer just a place for “user-generated content,” but a sophisticated partner for the highest levels of the film and television industry. Europe represents a diverse and complex market for entertainment marketers, with varying languages, cultural preferences, and viewing habits. The AI-driven nature of these new ads is particularly useful in this context, as it allows for localized targeting without the need for massive manual campaign management. The AI can identify which creative assets resonate in Germany versus France or Spain, optimizing the campaign in real-time to suit the specific nuances of each regional audience. How AI Enhances the Creative Process for Marketers One of the most significant benefits of AI-powered advertising is its ability to reduce the friction between creative production and distribution. In the past, marketers had to guess which trailer or clip would perform best with a specific audience. TikTok’s AI eliminates much of this guesswork through automated testing and optimization. When a
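TikTok has not published how its carousel selection model works, but the underlying idea described above — score a catalog of titles against a user's recent engagement and fill the four slots with the best matches — can be sketched in a few lines. Everything below (the genre labels, titles, and scoring rule) is illustrative, not TikTok's actual algorithm.

```python
from collections import Counter

# Hypothetical engagement history: genres of videos a user recently
# liked, shared, or watched to completion.
engagement_history = [
    "true_crime", "true_crime", "documentary", "comedy",
    "true_crime", "thriller", "documentary",
]

# Hypothetical catalog the streaming service wants to promote.
catalog = [
    {"title": "Cold Case Files: Berlin", "genres": ["true_crime", "documentary"]},
    {"title": "Laugh Track", "genres": ["comedy"]},
    {"title": "The Long Night", "genres": ["thriller", "drama"]},
    {"title": "Deep Sea Giants", "genres": ["documentary", "nature"]},
    {"title": "Romcom Roulette", "genres": ["comedy", "romance"]},
]

def rank_titles_for_user(history, titles, slots=4):
    """Score each title by how strongly its genres overlap the user's
    recent engagement, then fill the carousel's four slots."""
    affinity = Counter(history)                      # genre -> engagement count
    total = sum(affinity.values()) or 1
    def score(item):
        return sum(affinity[g] / total for g in item["genres"])
    return sorted(titles, key=score, reverse=True)[:slots]

for item in rank_titles_for_user(engagement_history, catalog):
    print(item["title"])
```

The same shape of logic extends to New Title Launch: swap genre affinity for intent signals such as franchise engagement or price sensitivity.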


Meta adds Manus AI tools into Ads Manager

The Evolution of Meta Ads Manager: Introducing Manus AI Integration The landscape of digital advertising is undergoing its most significant transformation since the invention of the tracking pixel. Meta Platforms, the parent company of Facebook and Instagram, has officially begun integrating Manus AI tools directly into its Ads Manager ecosystem. This move marks a pivot from experimental generative AI—like creating image variations or writing ad copy—to “agentic” AI, which is designed to handle complex workflows, perform research, and generate deep-dive reports autonomously. For years, advertisers have navigated a dashboard that, while powerful, often required significant manual labor to extract meaningful insights. The introduction of Manus AI into the Ads Manager workflow is intended to bridge the gap between raw data and actionable strategy. By embedding these tools into the everyday interface of performance marketers, Meta is signaling a future where the platform acts less like a static tool and more like an intelligent partner. What is Manus AI and Why Did Meta Integrate It? Manus AI represents a new frontier in artificial intelligence: the AI agent. Unlike standard large language models (LLMs) that focus on generating text based on prompts, agentic AI is designed to execute multi-step tasks. In the context of Meta Ads, this means the AI doesn’t just answer questions about your data; it can proactively organize that data, cross-reference it with market trends, and produce a comprehensive analysis without the user having to click through dozens of tabs. Meta’s acquisition and subsequent integration of Manus AI technology are strategic responses to the massive capital expenditures the company has funneled into AI research and development. Mark Zuckerberg has been transparent about the company’s “AI-first” pivot, but investors have remained focused on one core question: How will this spend translate into revenue? By placing Manus AI into the hands of advertisers—the primary source of Meta’s income—the company is creating a direct link between its AI innovations and advertising performance. Key Features: Automation for Research and Reporting The rollout of Manus AI tools within Ads Manager focuses on three primary pillars: research, reporting, and workflow automation. While the rollout is currently hitting select accounts through in-stream prompts and the “Tools” menu, the capabilities are already defining a new standard for ad management. Streamlined Report Building One of the most time-consuming aspects of being a digital marketer is reporting. Traditionally, this involves exporting CSV files, creating pivot tables, and manually identifying which creative assets or audience segments are driving the best return on ad spend (ROAS). Manus AI aims to automate this entire pipeline. Advertisers can now use the AI agent to build custom reports that highlight specific KPIs or compare campaign performance across different timeframes with minimal manual input. The agent understands the context of the data, allowing it to highlight anomalies or successes that a human eye might miss during a quick scan. Advanced Audience Research Understanding who is interacting with your ads is just as important as the ads themselves. Manus AI tools are built to perform deep audience research within the Ads Manager environment. By analyzing historical data and current market signals, the AI can suggest new audience segments that align with an advertiser’s goals. 
This goes beyond the “Advantage+” automated targeting Meta already offers; it provides the *why* behind the targeting, giving marketers the insights they need to refine their creative strategy. In-Workflow Assistance The integration is designed to be non-intrusive yet highly accessible. Many users are now seeing pop-up alerts and prompts that encourage them to activate Manus AI while they are in the middle of setting up a campaign. This “in-workflow” adoption strategy ensures that the AI is used at the point of greatest need—when a marketer is actually making decisions about budget, targeting, or creative direction. The Strategic Shift: From Generative AI to Agentic AI To understand why the addition of Manus AI is so significant, one must look at the broader context of Meta’s AI-driven advertising system. For the past year, Meta has focused heavily on tools like “Andromeda” and “GEM” (Generative AI for Marketing). These tools were largely focused on the “front end” of advertising—generating images, expanding backgrounds, and testing different headline variations. Manus AI represents the “back end” evolution. It is less about the visual appearance of the ad and more about the intelligence that powers the campaign. This shift toward agentic AI is a recognition that the bottleneck for many advertisers is no longer creating the ad itself, but managing the complexity of the data and the logistics of the campaign. By automating the research and analysis phases, Meta is lowering the barrier to entry for small businesses while providing enterprise-level tools to large agencies. Why This Matters for Performance Marketers The digital advertising industry is currently caught between increasing privacy restrictions (such as Apple’s ATT and the sunsetting of third-party cookies) and the need for higher precision in targeting. In this environment, the only way to maintain performance is through better data utilization. Manus AI provides that bridge. Efficiency Gains and Time Savings For agency owners and in-house marketing teams, time is the most valuable resource. The ability to delegate “grunt work”—like data cleaning and basic report generation—to an AI agent allows human talent to focus on high-level strategy and creative innovation. If Manus AI can reduce the time spent on reporting by even 20%, it equates to hundreds of hours saved across an organization over the course of a year. Faster Optimization Cycles In the world of paid social, speed is a competitive advantage. The faster you can identify that a creative trend is dying or that a specific demographic is over-indexing on cost-per-click (CPC), the faster you can pivot your budget. Manus AI’s real-time reporting capabilities mean that these insights are delivered as they happen, rather than at the end of a weekly or monthly reporting cycle. This enables a more agile approach to budget management. Evidence-Based Decision Making Subjectivity is the enemy of performance marketing. Advertisers often fall into the trap of following a
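Meta has not documented the internals of Manus AI's reporting agent, but the kind of output it is said to automate — ROAS per creative with underperformers flagged against the account average — is easy to illustrate. The data, column names, and anomaly threshold below are assumptions for the sketch, not Meta's implementation.

```python
from statistics import mean

# Hypothetical per-creative results; in practice this would come from an
# Ads Manager export or reporting API pull.
rows = [
    {"creative": "video_ugc_01",    "spend": 1200.0, "revenue": 5400.0},
    {"creative": "static_promo_02", "spend":  950.0, "revenue": 1100.0},
    {"creative": "carousel_03",     "spend":  640.0, "revenue": 2300.0},
    {"creative": "video_ugc_04",    "spend":  480.0, "revenue":  420.0},
]

def build_roas_report(data, anomaly_factor=0.5):
    """Compute ROAS per creative and flag creatives whose ROAS falls well
    below the account average -- the kind of summary an agent could
    assemble without manual pivot tables."""
    for row in data:
        row["roas"] = row["revenue"] / row["spend"]
    account_avg = mean(r["roas"] for r in data)
    for row in data:
        row["flag"] = "review" if row["roas"] < account_avg * anomaly_factor else "ok"
    return sorted(data, key=lambda r: r["roas"], reverse=True), account_avg

report, account_avg = build_roas_report(rows)
print(f"Account average ROAS: {account_avg:.2f}")
for r in report:
    print(f'{r["creative"]:<16} ROAS {r["roas"]:.2f}  [{r["flag"]}]')
```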


Google shifts Lookalike to AI signals in Demand Gen

The Evolution of Audience Targeting in Demand Gen Google is fundamentally restructuring how advertisers reach new customers within its Demand Gen campaigns. In a significant move toward an AI-first ecosystem, Google has announced that Lookalike segments will transition from strict targeting constraints to optimization signals. Scheduled to take full effect by March 2026, this shift represents a departure from the traditional “walled garden” approach to audience building, favoring a more fluid, machine-learning-driven model. Demand Gen campaigns, which replaced Discovery Ads, are designed to capture interest across Google’s most visual and immersive surfaces, including YouTube (Shorts, In-stream, and Feed), Google Discover, and Gmail. Central to these campaigns has been the “Lookalike” segment—a tool that allows advertisers to upload a seed list of existing customers and ask Google to find similar users. Under the new update, the role of that seed list is changing from a hard boundary into a directional compass. The Technical Shift: From Constraints to Signals To understand the weight of this update, it is essential to distinguish between a “constraint” and a “signal.” In the legacy version of Lookalike targeting, advertisers selected a similarity tier: Narrow (top 2.5% of similarity), Balanced (top 5%), or Broad (top 10%). The algorithm was strictly bound to these percentages. If a user fell outside that specific similarity pool, they would not see the ad, regardless of how likely they were to convert at that specific moment. Starting in March 2026, these tiers will act as “optimization signals.” This means that while Google’s AI will prioritize the users within those defined similarity pools, it is no longer forbidden from venturing outside of them. If the system’s predictive modeling identifies a user who is highly likely to convert but technically falls outside the “Broad” 10% similarity tier, the system can now serve an ad to that user. This transition effectively reframes the Lookalike segment. It is no longer a fence that keeps the campaign within a specific yard; it is a signal that tells the AI where to start looking, while granting it the autonomy to follow the scent of a conversion wherever it leads. Comparing the Before and After Models The practical implications for digital marketers are vast. Let’s break down the structural differences between the two models to better understand the impact on day-to-day campaign management. The Legacy Model (Pre-March 2026) Under the old system, advertisers had a high degree of predictability regarding who would see their ads. By choosing a “Narrow” tier, a brand could ensure that their budget was spent only on the users most mathematically similar to their existing customer base. This was ideal for niche products or brands with very specific buyer personas. However, the downside was a “scale ceiling.” Once the system exhausted the high-intent users within that narrow pool, performance would often plateau or costs-per-acquisition (CPA) would spike as the system struggled to find more conversions within a limited set of users. The New Signal-Based Model In the new model, the tiers still exist, but they function as a weighted priority. The AI uses the Lookalike list as a high-quality data source to understand the characteristics of a “good” customer. However, it combines this with real-time intent signals—such as recent search history, app usage, and video consumption—to find conversions that a strict similarity model might miss. 
This approach is designed to maximize conversion volume and lower the average CPA by allowing the algorithm to bypass the artificial boundaries of a percentage-based list. The Synergy with Optimized Targeting A critical component of this update is how it interacts with Google’s existing “Optimized Targeting” feature. Optimized Targeting is a setting that allows Google to look beyond your selected audience segments to find conversions you may have missed. When Lookalike segments become signals, they will stack with Optimized Targeting to create a powerful, albeit less transparent, engine for growth. If an advertiser enables both, the Lookalike signal provides the “who,” while Optimized Targeting provides the “how and when” for expansion. This layering allows Google’s AI to pursue a broader reach while still keeping the campaign anchored in the brand’s first-party data. For performance marketers, this means the system has more freedom than ever to pursue the most efficient conversions across the entire Google network. Why Google is Moving Toward AI Signals The shift toward signal-based targeting is not an isolated event; it is part of a broader industry trend toward “Black Box” advertising. Several factors are driving Google to make this change, ranging from technical necessity to performance optimization. 1. Overcoming the Scale Cap Strict Lookalike targeting often leads to diminishing returns. As campaigns mature, they frequently hit a wall where they cannot find new users within the narrow similarity pool. By converting these pools into signals, Google allows the campaign to scale more naturally. This is particularly important for Demand Gen campaigns, which are designed to sit at the top and middle of the marketing funnel, where high volume is a primary goal. 2. Navigating a Cookie-Less Future The digital advertising landscape is moving away from granular tracking and third-party cookies. As traditional tracking becomes less reliable, Google is leaning into “modeled behavior.” AI signals allow the system to use aggregated, anonymized data to predict behavior rather than relying on individual tracking. This makes the platform more resilient to privacy changes and browser-level tracking preventions. 3. Reducing Model Complexity Maintaining high-quality similarity models for every single advertiser is a massive computational task. By shifting to a more generalized AI suggestion model, Google can streamline its internal processing while potentially delivering better results for the advertiser through a more holistic view of user intent. Strategic Implications: What Advertisers Need to Do For brands and agencies, the move to signal-based Lookalikes requires a shift in strategy. The focus is moving away from “who we target” and toward “what data we feed the machine.” Prioritize High-Quality First-Party Data Because the Lookalike segment is now a signal, the quality of that signal is more important than ever. Advertisers should focus on
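The constraint-versus-signal distinction is easier to see in code. The sketch below is a conceptual illustration, not Google's auction logic: the similarity percentiles, conversion probabilities, and weighting are all invented for the example.

```python
# Hypothetical candidate users, each with a similarity percentile relative to
# the advertiser's seed list and a modelled conversion probability.
candidates = [
    {"user": "A", "similarity_pct": 2.0,  "p_convert": 0.04},
    {"user": "B", "similarity_pct": 8.5,  "p_convert": 0.09},
    {"user": "C", "similarity_pct": 14.0, "p_convert": 0.12},  # outside the "Broad" 10% tier
    {"user": "D", "similarity_pct": 40.0, "p_convert": 0.01},
]

def legacy_constraint(users, tier_cutoff=10.0):
    """Old behaviour: users outside the similarity tier are never eligible."""
    return [u for u in users if u["similarity_pct"] <= tier_cutoff]

def signal_based(users, tier_cutoff=10.0, similarity_weight=0.3):
    """New behaviour (sketched): tier membership boosts the score, but a
    strong conversion prediction can still win eligibility."""
    def score(u):
        in_tier = 1.0 if u["similarity_pct"] <= tier_cutoff else 0.0
        return similarity_weight * in_tier + (1 - similarity_weight) * u["p_convert"] * 10
    return sorted(users, key=score, reverse=True)

print([u["user"] for u in legacy_constraint(candidates)])   # user C is excluded outright
print([u["user"] for u in signal_based(candidates)][:3])    # user C can now rank highly
```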


Google’s Jeff Dean: AI Search relies on classic ranking and retrieval

In the rapidly evolving landscape of artificial intelligence, there is a common misconception that the advent of Large Language Models (LLMs) has completely rewritten the rules of information retrieval. Many observers assume that Google’s transition toward AI-driven results, such as AI Overviews, represents a total abandonment of the “old” search algorithms that have governed the web for decades. However, according to Jeff Dean, Google’s Chief AI Scientist, the reality is far more grounded in tradition than many realize. In a detailed interview on the Latent Space: The AI Engineer Podcast, Dean pulled back the curtain on the architecture powering Google’s modern AI search experiences. His insights reveal a critical truth for developers, SEO professionals, and tech enthusiasts: AI search is not a replacement for classic search infrastructure. Instead, it is a sophisticated layer that sits on top of a foundational system built on decades of ranking, retrieval, and indexing expertise. The Architecture: Filter First, Reason Last The core of Jeff Dean’s explanation centers on a concept that might surprise those who view AI as an all-knowing entity that “reads” the entire internet in real-time. He clarified that Google’s AI systems do not process the whole web simultaneously for every query. Instead, they follow a rigorous, multi-stage pipeline designed for efficiency and accuracy. Dean describes this as a “staged pipeline” that prioritizes filtering before any generative reasoning occurs. Visibility in an AI-generated search result still depends entirely on a document’s ability to clear traditional ranking thresholds. If a piece of content does not make it into the broad candidate pool of search results through standard SEO and ranking signals, it has zero chance of being used by an LLM to synthesize an answer. In essence, the AI doesn’t find the content; the search engine finds the content, and the AI merely explains it. The Candidate Pool: From Trillions to Thousands To understand how this works at scale, we must look at the numbers Dean provided. The internet consists of trillions of tokens—fragments of data that make up the web. When a user enters a query, it is computationally impossible and wildly inefficient for a high-reasoning LLM to scan those trillions of tokens to find an answer. Instead, Google uses “lightweight methods”—the classic retrieval systems—to narrow the field. This first pass identifies a subset of roughly 30,000 documents that are potentially relevant to the user’s intent. This initial culling is done in milliseconds using traditional signals. Dean explained that this process is about “down-ranking” the noise to find a manageable set of “interesting tokens.” Reranking and Refining Once the system has identified the top 30,000 candidates, it doesn’t stop there. Google applies increasingly sophisticated algorithms and signals to refine that list further. This is a tiered process where the cost of computation increases as the number of documents decreases. The system filters the 30,000 documents down to a few hundred, and eventually down to the final set—often around 10 to 100 documents—that are truly relevant to the specific task. Dean refers to the user experience of AI search as an “illusion” of attending to the entire web. While it feels like the AI is searching the whole internet for you, it is actually only “paying attention” to the very small subset of data that the traditional ranking engine has already verified as high-quality and relevant. 
“You’re going to want to identify what are the 30,000-ish documents… and then how do you go from that into what are the 117 documents I really should be paying attention to?” Dean noted. Matching Intent: Moving from Keywords to Meaning One of the most significant shifts in search over the last several years has been the move from lexical matching (finding exact words) to semantic matching (understanding the meaning behind words). While LLMs have accelerated this trend, Dean pointed out that this evolution is not entirely new; it is a continuation of a journey Google started long ago. In the early days of search, if a user typed “blue suede shoes,” the engine looked for pages that contained those exact three words. If a page used the phrase “azure leather footwear,” it might not show up, even though it was contextually identical. Today, thanks to LLM-based representations of text, Google can move beyond “hard” word overlap. The Power of Topic Overlap Dean explained that LLMs allow Google to evaluate whether a page—or even a specific paragraph within a page—is topically relevant to a query, even if the wording differs entirely. This shift places a premium on topical authority and comprehensive coverage. For content creators, this means that repeating a keyword five times is far less effective than explaining a concept so clearly that the system understands the subject matter’s intent. This “softening” of the definition of a query allows Google to bridge the gap between how people think and how they type. By using LLM representations, the search engine can map the “meaning” of a query to the “meaning” of a document, creating a much more fluid and intuitive discovery process. The 2001 Milestone: Why Query Expansion Changed Everything To provide context for today’s AI advancements, Jeff Dean took a trip down memory lane to 2001. This was a pivotal year for Google, marking the moment when the company moved its entire index from physical disks into RAM (memory) across a massive fleet of machines. Before 2001, adding extra terms to a user’s query was expensive. Every time Google wanted to look for a synonym, it required a “disk seek,” which added latency and slowed down the search for the user. Consequently, the engine had to be very selective about the terms it searched for. Query Expansion in the Pre-LLM Era Once the index was in memory, the technical constraints vanished. Google could suddenly take a three-word query from a user and “expand” it into 50 terms behind the scenes. If a user searched for “cafe,” the system could simultaneously look for “restaurant,” “bistro,” “coffee shop,” and “diner” without any performance penalty. Dean emphasized that
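The staged pipeline Dean describes — cheap retrieval to roughly 30,000 candidates, then progressively more expensive reranking down to the handful of documents the model actually reads — can be sketched as a funnel of functions. The corpus size, scoring stand-ins, and cutoffs below are placeholders; only the shape of the funnel reflects his description.

```python
import random

random.seed(7)

# Toy corpus: the real system starts from trillions of tokens across the web.
corpus = [f"doc-{i}" for i in range(100_000)]

def cheap_retrieval(query, docs, k=30_000):
    """Stage 1: inexpensive signals (index lookups, link-based scores)
    cut the full corpus down to a broad candidate pool."""
    return random.sample(docs, k)             # stand-in for real retrieval

def mid_rerank(query, docs, k=300):
    """Stage 2: richer, costlier ranking signals over the candidate pool."""
    return sorted(docs, key=lambda d: hash((query, d, "mid")))[:k]

def deep_rerank(query, docs, k=100):
    """Stage 3: the most expensive scoring only ever sees a few hundred docs."""
    return sorted(docs, key=lambda d: hash((query, d, "deep")))[:k]

def staged_pipeline(query):
    pool = cheap_retrieval(query, corpus)      # ~30,000 candidates
    shortlist = mid_rerank(query, pool)        # a few hundred
    return deep_rerank(query, shortlist)       # what the LLM actually "reads"

print(len(staged_pipeline("blue suede shoes")))   # 100
```

A document that never makes it into stage 1 never reaches the model, which is the practical point for SEO.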


Why AI optimization is just long-tail SEO done right

The digital marketing landscape is currently undergoing a massive rebranding. If you browse job boards like LinkedIn or Indeed today, you will notice a dizzying array of new acronyms. Companies are no longer just looking for “SEO Specialists”; they are hiring for GEO (Generative Engine Optimization), AEO (Answer Engine Optimization), and AIO (Artificial Intelligence Optimization). Some industry veterans have even jokingly suggested LMAO—Large Model Answer Optimization. While these terms might make for catchy headlines and trendy job titles, they often obscure a fundamental truth: AI optimization is not a brand-new discipline. It is the evolution and refinement of a strategy that savvy marketers have advocated for decades. Specifically, AI optimization is simply long-tail SEO done correctly. In the age of Large Language Models (LLMs), the “long tail” of search is no longer a secondary consideration—it is the main event. Understanding Why LLMs Still Depend on Traditional Search To understand why long-tail SEO is the key to AI visibility, we must first look at how LLMs like GPT-4o, Claude, Gemini, and Grok actually function. At their core, these models are transformers designed to predict the next token (a piece of a word) based on the context of the previous tokens. They are incredibly sophisticated, but they are not omniscient. They are trained on massive, static datasets including Common Crawl, Wikipedia, digitized books, and academic papers. However, training these foundation models is prohibitively expensive and time-consuming. Because of this, companies only run major training cycles every few years. This creates a “knowledge cutoff.” To bridge the gap between their static training data and the real-time needs of users, AI companies rely on Retrieval-Augmented Generation (RAG). When an LLM realizes it doesn’t have the specific, fresh, or highly detailed information needed to answer a prompt, it does exactly what a human would do: it performs a web search. This is a critical point for digital publishers. LLMs are not replacing search engines; they are becoming the world’s most active search engine users. When a user asks an AI a complex question, the AI converts that prompt into a search query and scans the web for the best answer. If your content is the most authoritative answer to that specific query, the AI will cite you. If you haven’t optimized for the long tail, you simply don’t exist in the AI’s worldview. The Shift from Head Terms to the Conversational Tail For the last twenty years, SEO was dominated by “head terms”—short, one- or two-word queries like “best laptops” or “running shoes.” Google’s interface, a single empty text box, conditioned users to be brief. Because head terms drove the most volume, brands focused their entire budgets on ranking for those few high-competition keywords. Long-tail keywords—specific, multi-word phrases—were often treated as an afterthought or a “bonus” source of traffic. That era is ending. The interface of the AI era is conversational. When people interact with ChatGPT or Perplexity, they don’t type “Italian food.” They type, “Find me an authentic Italian restaurant in downtown Chicago that has gluten-free options and is quiet enough for a business meeting.” This level of nuance represents the “fat tail” of search. LLMs take these highly specific human prompts and translate them into detailed search queries. They are looking for content that matches the specificity of the user’s intent. 
The brands that win in this environment are the ones that have already built a library of content addressing these niche, detailed, and specific questions. The “head” is shrinking, and the “tail” is becoming the primary driver of brand visibility. Who are the LLMs searching? It is important to know which search engines these AI models are using to find their answers. While the partnerships are sometimes opaque, the current ecosystem generally looks like this: ChatGPT: Primarily utilizes Bing Search for real-time web access. Claude: Often integrates with Brave Search. Gemini: Naturally relies on Google Search. Grok: Uses a combination of X (formerly Twitter) search and its own internal web indexing tools. Perplexity: Operates its own hybrid index, combining multiple sources to provide real-time citations. As billions of monthly searches transition from traditional engines to AI interfaces, the number of queries these LLMs perform on behalf of users will grow exponentially. To be visible, you must rank in the search engines these models trust. Leveraging AI to Master Long-Tail SEO Strategy The irony of the AI era is that the very tools changing the industry can also be used to master it. Long-tail SEO has always been difficult because it requires a deep understanding of customer psychology and a massive volume of content. In the past, researching these topics took weeks. Now, you can use LLMs to accelerate the process. 1. Identifying Real Customer Questions The foundation of long-tail SEO is understanding the specific problems your audience is trying to solve. You can use an LLM to act as a research analyst. Instead of just asking for “keyword ideas,” you should prompt the AI to model the actual journey of your customer. Try using a prompt similar to this to uncover high-intent long-tail opportunities: “Act as an SEO strategist and customer research analyst. I want to discover long-tail search questions real people might ask about my business. Generate 75-100 realistic, natural-language search queries grouped by Awareness, Consideration, Decision, and Post-Purchase. Focus on specificity, pain points, and comparison questions rather than generic keywords.” By forcing the AI to think in terms of customer stages, you move away from repetitive keyword lists and toward a content map that reflects real-world needs. These specific queries are exactly what LLMs look for when they perform RAG-based searches. 2. Mining Your Own Data Goldmine: Site Search One of the most overlooked assets in SEO is internal site search data. When a user is already on your website and uses the search bar, they are telling you exactly what they couldn’t find through your navigation. This is pure, unadulterated long-tail intent. Analyzing thousands of site search queries used to be a grueling manual task. Now, you can
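The RAG loop described above — a conversational prompt rewritten into a search query, candidate pages retrieved, the best match cited — looks roughly like this in miniature. The URLs, stop-word list, and keyword-overlap retriever are deliberately naive stand-ins for a production search index.

```python
# A toy content library standing in for pages an AI assistant might retrieve.
pages = {
    "https://example.com/gluten-free-italian-chicago": (
        "Quiet, authentic Italian restaurants in downtown Chicago with "
        "dedicated gluten-free menus, suitable for business meetings."
    ),
    "https://example.com/deep-dish-history": (
        "A history of Chicago deep dish pizza and its most famous pizzerias."
    ),
}

def to_search_query(prompt):
    """Stand-in for the step where an assistant rewrites a conversational
    prompt into a concise web search query."""
    stop_words = {"find", "me", "an", "a", "that", "has", "is", "and", "for"}
    return " ".join(w for w in prompt.lower().split() if w not in stop_words)

def retrieve(query, library, k=1):
    """Naive keyword-overlap retrieval over the library."""
    terms = set(query.split())
    scored = sorted(library.items(),
                    key=lambda kv: len(terms & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

prompt = ("Find me an authentic Italian restaurant in downtown Chicago that has "
          "gluten-free options and is quiet enough for a business meeting")
for url, _snippet in retrieve(to_search_query(prompt), pages):
    print("Cited source:", url)
```

The page that answers the specific, long-tail version of the question is the one that gets cited; a generic "Italian food" page never enters the running.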


Google Search Console AI-powered configuration rolling out

The Evolution of Search Data Analysis: Google Search Console Embraces AI For years, Google Search Console has served as the bedrock of organic search data for webmasters, SEO professionals, and digital marketers. It is the primary bridge between a website and Google’s search index, providing invaluable insights into how a site is discovered, crawled, and indexed. However, as websites grow in complexity and the volume of search data increases, extracting specific, actionable insights from the Performance report has often required a significant amount of manual effort—drilling down through layers of filters, configuring date comparisons, and toggling specific metrics. Recognizing the need for a more streamlined approach to data analysis, Google has officially begun the wide-scale rollout of its AI-powered configuration tool within Google Search Console. After several months of limited testing, this feature is now becoming available to the global SEO community. This update represents a major shift in how users interact with search data, moving away from purely manual interface interactions toward a natural language processing model that allows for more intuitive, conversational data exploration. Understanding the AI-Powered Configuration Tool The AI-powered configuration is a generative assistant integrated directly into the Search Console interface. Its primary function is to transform natural language descriptions into technical report settings. Instead of a user manually selecting dimensions, metrics, and filter operators, they can simply describe the specific analysis they wish to perform in plain English. Google’s implementation of this tool aims to bridge the gap between complex data needs and the technical knowledge required to navigate the GSC UI. By interpreting user intent, the AI automatically configures the Performance report, applying the necessary filters for queries, pages, countries, and devices, while also selecting the relevant metrics (Clicks, Impressions, CTR, and Position) to answer the user’s specific question. This rollout follows a successful testing phase that began roughly two months ago. During that period, select users were given early access to experiment with dynamic reporting. Google’s recent announcement on LinkedIn confirms that the testing phase has concluded, and the feature is now being pushed to all users globally. How the New AI Configuration Works When you log in to your Google Search Console account, you may notice a notification at the top of your Performance report that says, “New! Customize your Performance report using AI.” Clicking on this call-to-action opens a dialogue box where the magic happens. This is the new command center for your search data. The system is designed to handle three core elements of reporting that traditionally took multiple clicks to set up: 1. Automatic Metric Selection In the standard Performance report, users often have to manually toggle checkboxes for Clicks, Impressions, Average CTR, and Average Position. The AI-powered tool automatically determines which of these metrics are most relevant to your request. For example, if you ask, “How visible was my site last week?”, the AI will prioritize Impressions. If you ask, “Which pages are driving the most traffic?”, it will focus on Clicks. 2. Dynamic Filter Application Filtering is perhaps the most powerful part of Google Search Console, but it can be cumbersome. The AI tool allows users to narrow down data sets instantly. 
It can interpret requests to filter by query (e.g., “queries containing ‘best shoes’”), page (e.g., “traffic to my blog category”), country, device, and search appearance (such as How-to results or FAQ snippets). It handles the logic of “contains,” “does not contain,” and “exact match” based on the phrasing of your natural language input. 3. Complex Comparison Configuration One of the most time-consuming tasks in GSC is setting up custom date comparisons or comparing performance across different devices. The AI configuration excels at this. You can ask it to “Compare last month’s mobile clicks to the same month last year,” and the tool will instantly set up the date ranges and device filters that would otherwise require several manual steps. Why the Move to AI Matters for SEO Professionals The introduction of AI into Google Search Console is more than just a convenience feature; it is a fundamental change in the workflow of search engine optimization. Here are several reasons why this rollout is significant for the industry: Increased Efficiency and Speed For agency-side SEOs managing dozens of properties, every minute saved on data extraction is a minute that can be spent on strategy and implementation. The AI tool reduces the “time-to-insight.” Instead of spending five minutes navigating menus to create a specific year-over-year report for a specific subfolder, the user can get the result in seconds. This allows for a more “stream-of-consciousness” approach to data analysis, where ideas can be tested and verified as quickly as they are thought of. Lowering the Barrier to Entry Data analysis is a specialized skill. For small business owners or entry-level marketers who may find the technical UI of Search Console intimidating, the natural language interface provides a welcoming entry point. It democratizes access to deep data insights, ensuring that you don’t need to be a “power user” to understand how your website is performing in search. Reduced Risk of User Error Manually setting up filters—especially complex Regex (Regular Expression) filters or multi-layered date comparisons—leaves room for error. A single mistyped character or an incorrectly selected “not contains” filter can lead to inaccurate data interpretations. By allowing the AI to translate clear intent into technical configurations, the likelihood of configuration errors is minimized, provided the prompt is clear. Practical Use Cases for the AI Tool To get the most out of the new rollout, it helps to understand the types of questions the AI can handle. Here are several practical ways you can use the AI-powered configuration today: Query Analysis: “Show me all queries that contain the word ‘tutorial’ but excluding those that mention ‘video’.” Geographic Performance: “How has my organic traffic in the United Kingdom changed over the last 90 days compared to the previous period?” Device Trends: “Show me my average position on mobile devices for the last 30 days for my homepage.” Content Audits: “Filter
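Under the hood, every natural-language request resolves into the same ingredients the Performance report has always used: a date range, dimensions, filters, and metrics. The sketch below expresses one of the example prompts as a request body in the shape of the public Search Analytics API; the AI feature itself works in the UI, and its internal format may differ.

```python
# "Show me mobile queries containing 'tutorial' but excluding 'video'."
# Sketched against the public Search Analytics API request format.
request_body = {
    "startDate": "2025-10-01",
    "endDate": "2025-10-31",
    "dimensions": ["query", "device"],
    "dimensionFilterGroups": [
        {
            "filters": [
                {"dimension": "query",  "operator": "contains",    "expression": "tutorial"},
                {"dimension": "query",  "operator": "notContains", "expression": "video"},
                {"dimension": "device", "operator": "equals",      "expression": "MOBILE"},
            ]
        }
    ],
    "rowLimit": 250,
}

# A comparison ("last month vs. the same month last year") is the same
# configuration run twice with shifted dates.
comparison_body = dict(request_body, startDate="2024-10-01", endDate="2024-10-31")

print(request_body["dimensionFilterGroups"][0]["filters"][0])
print(comparison_body["startDate"], "->", comparison_body["endDate"])
```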


Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it

Rand Fishkin, the founder of SparkToro and a titan in the world of search engine optimization, recently published what many are calling the most critical piece of primary research the AI visibility industry has seen to date. In collaboration with Patrick O’Donnell, Fishkin’s study meticulously dismantles the long-held assumption that AI tools function like traditional search engines with stable, predictable rankings. His core conclusion is striking: AI models produce wildly inconsistent brand recommendation lists. This variability is so high that the very concept of a “ranking position” in an AI world is effectively meaningless. While many in the marketing world were stunned by these findings, the research highlights a deeper, more structural reality about how Large Language Models (LLMs) operate. They are not deterministic lookup tables; they are probability engines. Fishkin’s data proves the problem, but to solve it, we must look deeper into the mechanics of “confidence” and how AI systems build trust in a brand. The Death of the AI Ranking Position Myth For decades, SEO professionals have obsessed over “Rank #1.” Whether it was on Google or Bing, the goal was to secure a specific spot on a page. When ChatGPT, Claude, and Gemini emerged, marketers naturally tried to apply this same logic. They wanted to know: “How do I rank #1 in ChatGPT?” Fishkin and O’Donnell’s research proves that this question is fundamentally flawed. They ran 2,961 prompts across the leading AI platforms, focusing on brand recommendations across 12 distinct categories. The results were chaotic. Fewer than 1 in 100 runs produced the same list of brands, and fewer than 1 in 1,000 produced the same brands in the same order. As Fishkin puts it, treating these platforms as deterministic ranking systems is “provably nonsensical.” However, Fishkin also discovered a pattern within the chaos. While the specific “rank” was inconsistent, some brands appeared much more frequently than others. This led to a shift in focus from “rank position” to “visibility percentage.” If a brand shows up in 95% of queries for a specific category, it is a dominant player, regardless of whether it appears first or third in a specific session. This variance is where the real story of AI optimization begins. Why AI Recommendations Are Inconsistent To understand why Fishkin’s lists changed every time he hit “enter,” we have to understand that AI platforms are confidence engines, not recommendation engines. When you ask ChatGPT for the “best cancer care hospitals,” it doesn’t search a database. Instead, it generates a response based on a probability distribution shaped by three key factors: What the model “knows” from its massive training corpus. How confidently it knows that information based on the weight of the data. What specific information it retrieved or “grounded” itself with at the exact moment of the query. When a model is highly confident about an entity’s relevance, that entity appears consistently. For example, in Fishkin’s data, “City of Hope” appeared in 97% of cancer care responses. This isn’t luck; it’s the result of deep, corroborated, multi-source presence in the data the AI consumes. Conversely, brands that appear only 5% to 10% of the time reside in a “low-confidence zone.” The AI knows they exist, but it hasn’t found enough corroborating evidence to commit to them consistently. 
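Visibility percentage is straightforward to compute yourself: run the same prompt repeatedly, record which brands appear at all, and divide by the number of runs. The brand lists and run counts below are made up to show the spread between a consistently cited brand and a low-confidence also-ran.

```python
from collections import Counter

# Hypothetical brand lists returned by the same prompt across 20 runs.
runs = [
    ["City of Hope", "Mayo Clinic", "MD Anderson"],
    ["Mayo Clinic", "City of Hope", "Cleveland Clinic"],
    ["City of Hope", "MD Anderson", "Johns Hopkins"],
] * 6 + [["City of Hope", "Mayo Clinic"], ["Regional Center", "City of Hope"]]

def visibility(run_results):
    """Share of runs in which each brand appears at all -- position is ignored."""
    counts = Counter(brand for run in run_results for brand in set(run))
    total = len(run_results)
    return {brand: counts[brand] / total for brand in counts}

for brand, share in sorted(visibility(runs).items(), key=lambda kv: -kv[1]):
    print(f"{brand:<18} {share:.0%}")
```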
The Framework of Cascading Confidence To fix the inconsistency problem, brands must move from the “inconsistent pile” to the “consistent pile.” This requires navigating what is known as the “Cascading Confidence” framework. This is a multi-stage pipeline—formalized as DSCRI-ARGDW—that every piece of content must pass through before it can influence an AI recommendation. The pipeline consists of ten distinct gates: Discovered, Selected, Crawled, Rendered, Indexed, Annotated, Recruited, Grounded, Displayed, and Won. At every single stage, the AI system asks: “How confident am I in this content?” The Multiplicative Nature of AI Trust Confidence in an AI system is not additive; it is multiplicative. This is a crucial distinction that many marketers miss. If a brand has 90% confidence at each of the ten stages, the final end-to-end confidence is not 90%—it is 0.9 raised to the tenth power, which equals roughly 35%. If confidence drops to 80% per stage, the total confidence plummets to 11%. One single failure point—such as a website that is slow to render or has inconsistent information—can destroy the entire “bid” for an AI recommendation. This principle was echoed years ago by Google’s Gary Illyes, who noted that a zero on any single ranking factor kills the entire ranking bid. In the age of AI, this “cascading confidence” is what determines whether your brand is a 97% “City of Hope” or a 5% “also-ran.” The Three Graphs Model: How AI Sees the World AI systems do not rely on a single source of truth. Instead, they pull from three different knowledge representations simultaneously. Understanding how your brand lives within these three “graphs” is the key to achieving universal visibility. 1. The Entity Graph (Knowledge Graph) This is a database of explicit entities and their relationships. It contains binary, verified facts. Either a brand is in the knowledge graph, or it isn’t. This graph has low “fuzziness.” It is the foundation of identity. 2. The Document Graph (Search Engine Index) This is the traditional territory of SEO. It consists of annotated URLs and ranked pages. It has medium fuzziness. AI models use this graph to “ground” their answers in real-time web data to prevent hallucinations. 3. The Concept Graph (LLM Parametric Knowledge) This is the learned association within the model itself. It is where “fuzziness” is highest and where Fishkin’s documented inconsistency originates. This graph is built during the training phase and represents the AI’s internal “understanding” of a topic. Brands that achieve near-universal visibility are present across all three graphs. They have a strong presence in the Knowledge Graph, high-ranking authoritative pages in the Document Graph, and deep encoding in the Concept Graph. If a brand is missing from one, the AI hedges its bets, leading to the inconsistency Fishkin observed. Crossing the
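The multiplicative math is worth seeing in numbers, because it explains why "pretty good everywhere" still loses. The ten stage names come from the DSCRI-ARGDW pipeline described above; the per-stage confidence values are illustrative.

```python
import math

stages = ["Discovered", "Selected", "Crawled", "Rendered", "Indexed",
          "Annotated", "Recruited", "Grounded", "Displayed", "Won"]

def end_to_end(per_stage_confidence, n_stages=len(stages)):
    """Multiplicative confidence: one weak stage drags the whole product down."""
    return per_stage_confidence ** n_stages

for c in (0.95, 0.90, 0.80):
    print(f"{c:.0%} per stage -> {end_to_end(c):.0%} end to end")
# 90% per stage yields roughly 35% end to end; 80% collapses to about 11%.

# A single failed stage (e.g., a page that never renders) zeroes the entire bid:
mixed = [0.95] * 9 + [0.0]
print("One zero stage ->", math.prod(mixed))
```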


Google Ads adds beta data source integrations to conversion settings

Google Ads is currently rolling out a significant update to its conversion measurement infrastructure, introducing a beta feature that allows advertisers to integrate external data sources directly within their conversion action settings. This move represents a major shift in how the platform handles first-party data, aiming to bridge the gap between backend customer databases and front-facing advertising performance metrics. As the digital advertising landscape continues to grapple with the decline of third-party cookies and the increasing importance of privacy-centric measurement, Google is doubling down on tools that allow brands to leverage their own data more effectively. By embedding these data connections directly into the conversion setup process, Google Ads is streamlining what was previously a complex technical workflow, making high-level data integration more accessible to businesses of all sizes. Understanding the New Data Source Integration Beta The new feature appears as a highlighted prompt within the conversion action details section of the Google Ads interface. Specifically, users will find a new module labeled “Get deeper insights about your customers’ behavior to improve measurement.” This section encourages advertisers to connect their Google tag to external databases to enrich the data signals being sent back to the platform. At the time of the beta rollout, the supported integrations include industry-standard platforms such as Google’s own BigQuery and MySQL. By creating a direct pipeline between these databases and Google Ads conversion settings, advertisers can ensure that their campaign measurement is supported by the most accurate, up-to-date information stored in their own internal systems. Historically, syncing backend data with Google Ads required manual CSV uploads through Offline Conversion Imports (OCI), complex API integrations, or third-party middleware tools. While these methods are still available, the native integration within the conversion settings menu signifies a move toward a “no-code” or “low-code” environment for advanced data management. The Critical Role of First-Party Data in 2025 and Beyond To understand why this update is so critical, one must look at the broader context of the advertising industry. With the implementation of privacy frameworks like Apple’s App Tracking Transparency (ATT) and the ongoing transition away from traditional tracking methods, “signal loss” has become a primary concern for digital marketers. Signal loss occurs when the path between an ad click and a final purchase becomes obscured, making it difficult for algorithms to know which ads are actually driving revenue. First-party data—information that a company collects directly from its customers—is the most resilient solution to this problem. When an advertiser can tell Google Ads, “This specific user who clicked an ad last week has now completed a high-value purchase recorded in our MySQL database,” the platform can use that information to optimize its bidding strategies. This direct integration ensures that the “feedback loop” for Google’s machine learning models remains intact, even when browser-based tracking fails. How the Integration Improves Measurement and Performance The integration of BigQuery and MySQL directly into conversion settings offers several immediate benefits for campaign performance and reporting. 
By enriching conversion metrics with backend data, advertisers can move beyond simple “thank you page” tracking and start measuring the actions that truly drive business growth. Enhanced Conversion Accuracy Browser-based tracking is prone to errors. Users might clear their cookies, use ad blockers, or switch devices between the initial click and the final conversion. By pulling data directly from a data warehouse like BigQuery, advertisers can reconcile these discrepancies. This ensures that every conversion recorded in the CRM or backend database is properly attributed to the corresponding ad interaction, providing a much clearer picture of Return on Ad Spend (ROAS). Optimizing for High-Value Actions Not all conversions are created equal. A simple lead form submission might be worth $10, but a lead that eventually turns into a closed-won deal might be worth $10,000. By connecting backend databases, advertisers can feed the final transaction value back into Google Ads. This allows the platform’s Smart Bidding algorithms to focus on finding more users who resemble the “high-value” customers rather than just “high-volume” leads. Closing the Offline-to-Online Gap For businesses with long sales cycles or offline components—such as automotive dealerships, real estate agencies, or B2B software companies—the connection between an online ad and an offline sale is often broken. Native data source integrations make it easier to sync these offline milestones. When a status changes in a MySQL database (e.g., “Lead” to “Contract Signed”), that update can be reflected in Google Ads more seamlessly than ever before. Streamlining the Technical Workflow for Advertisers One of the most noteworthy aspects of this beta is where it lives: inside the conversion settings. Previously, setting up data pipelines was often relegated to the “Linked Accounts” section or required extensive work within Google Tag Manager. By placing the integration prompt directly where advertisers define their success metrics, Google is making advanced measurement a standard part of campaign setup rather than an afterthought. This accessibility is a game-changer for mid-market advertisers who may not have dedicated data science teams. For an enterprise, setting up a BigQuery pipeline is standard operating procedure. For a growing e-commerce brand or a regional service provider, it used to be a daunting technical hurdle. The new beta simplifies the authentication and mapping process, reducing the friction that often prevents businesses from utilizing their most valuable data assets. Strategic Implications: Smarter Bidding and Attribution Google Ads relies heavily on automated bidding strategies like Target CPA (Cost Per Acquisition) and Target ROAS. These systems are only as good as the data they receive. In data science, there is a common saying: “Garbage in, garbage out.” If the data fed into Google Ads is incomplete or inaccurate, the bidding algorithm will make sub-optimal decisions. By integrating direct data sources, advertisers are providing Google with “high-fidelity” signals. This leads to several strategic advantages: Improved Attribution Modeling With a direct link to a data warehouse, Google Ads can better understand the customer journey across different touchpoints. If a customer interacts with multiple ads over a period of weeks before a record is updated in a MySQL database,
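Google has not published the exact mechanics of the beta, but the data it needs is familiar from offline conversion imports: a click identifier, a conversion time, and a value pulled from the backend. The table, column names, and query below are hypothetical examples of that shape, written in SQL that runs on both MySQL and BigQuery.

```python
# Hypothetical CRM table: the kind of backend record a BigQuery or MySQL
# integration would expose to conversion measurement. Column names are
# illustrative, not a required schema.
conversion_query = """
SELECT
    gclid,                 -- Google click ID captured on the original lead form
    conversion_time,
    deal_value_usd
FROM crm.leads
WHERE status = 'Contract Signed'
  AND conversion_time >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
"""

def to_conversion_rows(records):
    """Map backend rows to the value-and-identifier shape needed to credit
    an offline conversion back to the originating ad click."""
    return [
        {
            "google_click_id": r["gclid"],
            "conversion_time": r["conversion_time"],
            "conversion_value": r["deal_value_usd"],
            "currency_code": "USD",
        }
        for r in records
        if r.get("gclid")          # rows without a click ID cannot be attributed
    ]

sample = [{"gclid": "Cj0KCQjw_example", "conversion_time": "2025-11-03 14:22:00",
           "deal_value_usd": 10000.0}]
print(to_conversion_rows(sample))
```

Feeding final deal values like this back into the platform is what lets Smart Bidding optimize toward $10,000 contracts rather than $10 form fills.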


How to create a persona GPT for SEO audience research

The Evolution of Audience Research in the Age of AI In a perfect marketing world, you would have a direct line to your most valuable customers. Before hitting “publish” on a high-stakes blog post or launching a new service page, you would simply pick up the phone, call a representative user, and ask them if your content truly solves their problems. In reality, the logistics of modern SEO make this nearly impossible to scale. Conducting manual audience interviews for every single content update or new topic is prohibitively expensive and time-consuming for most digital marketing teams. A few years ago, the path to ranking was much more linear. If you understood keyword intent and produced high-quality content that satisfied that intent, you could reasonably expect to climb to the top of Google’s search engine results pages (SERPs). But the landscape has shifted. We have entered an era where search engines are powered by sophisticated AI models, and user expectations have risen accordingly. Today, searchers aren’t just looking for information; they are looking for relevance, empathy, and specific solutions that acknowledge their unique pain points. Audience research has moved from being a “nice-to-have” luxury to a critical pillar of SEO. However, the resource gap remains a significant hurdle. This is where custom GPTs enter the frame. By configuring a tailored version of ChatGPT with your specific persona research, you can create a digital “sounding board” that mimics your target audience. While these persona GPTs are not absolute replacements for human interaction, they serve as powerful tools to identify gaps in your content, refine your brand voice, and ensure your SEO strategy aligns with the real-world needs of your customers. Establishing a Solid Foundation: Performing Audience Research Before you can build an AI persona, you need raw, authentic data. A custom GPT is only as good as the information you feed it. To move beyond generic “target demographics” and into the “why” behind search intent, you need to employ diverse research methods. Understanding the emotional triggers and day-to-day challenges of your audience is what separates generic content from high-converting SEO assets. Here are several practical, high-impact methods and tools to gather the data necessary for your persona GPT: Utilizing SparkToro for Audience Intelligence SparkToro is an essential tool for understanding the digital ecosystem where your audience lives. Unlike traditional SEO tools that focus on keywords, SparkToro allows you to search by website, interest, or specific social media handles. This helps you identify what your audience reads, who they follow, and what podcasts they listen to. By segmenting different audience types here, you can provide your GPT with a list of “influences” that shape your persona’s worldview. Mastering Review Mining One of the most authentic ways to understand customer sentiment is to look at what they say when they think you aren’t listening. Review mining involves scraping or manually reviewing feedback from your own company or your competitors on platforms like G2, Capterra, Amazon, or Google My Business. Look for recurring patterns: What specific features do they praise? What common frustrations lead to a one-star review? Understanding the “why” behind their satisfaction or disappointment provides the emotional depth your AI persona needs to feel realistic. Analyzing Sales Calls and Lead Interactions Your sales and customer success teams are on the front lines every day. 
Listening to recorded sales calls or reviewing lead notes is a goldmine for SEO research. These interactions reveal the exact phrasing customers use when describing their problems. You can hear the urgency in their voices and identify the specific questions that often precede a conversion. Capturing these real-world queries allows you to build a GPT that can accurately predict how a customer might react to a specific call to action (CTA). How to Construct a Comprehensive Customer Persona Once you have gathered your raw data, the next step is to synthesize it into a structured persona. Think of this as the “biography” of your target user. While tools like Figma and FigJam are excellent for visually mapping these personas, the content of the persona is what truly matters for the GPT configuration. A high-quality SEO persona should include the following elements: Bio and Psychographic Traits Give your persona a name and a narrative background. Are they “Tech-Savvy Tina,” a middle-manager under pressure to cut costs, or “Founder Fred,” who is struggling to scale a small team? Use trait sliders to define their personality: Are they risk-averse or adventurous? Analytical or emotional? These nuances help the GPT adjust its tone when reviewing your content. Goals and Deep-Seated Pain Points Clearly define what your persona is trying to achieve and what is standing in their way. Pain points are often the primary drivers of search queries. If your persona’s main pain point is “wasting time on manual data entry,” your GPT will be able to flag content that is too fluff-heavy and doesn’t get to the solution fast enough. User Stories and Emotional Journeys Map out a day in the life of your persona. What triggers them to search for your solution? How do they feel before they find you (anxious, overwhelmed, curious) and how should they feel after interacting with your brand (relieved, empowered, confident)? Defining this emotional arc ensures your SEO content isn’t just informative, but also resonant. Trigger Words and Content Focus Identify specific words or phrases that grab your persona’s attention. Conversely, note “turn-off” words that might sound too corporate or too informal for their taste. This level of detail allows the GPT to act as a copy editor, scanning your drafts for language that might alienate your core audience. Step-by-Step: Creating a Custom GPT for Your Persona With your research finalized and your persona mapped out, you are ready to bring them to life within ChatGPT. The “Custom GPT” feature allows you to build a specialized version of the model that only operates based on the instructions and data you provide. Accessing the GPT Builder Log in to your ChatGPT account and navigate to the
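The same persona brief that goes into a custom GPT's instructions can also be exercised programmatically, which is useful for batch-reviewing drafts. The sketch below assumes the openai Python package and an API key are configured; the persona details, draft copy, and model name are placeholders.

```python
from openai import OpenAI   # assumes the openai package and an API key are configured

# Persona brief distilled from SparkToro research, review mining, and sales calls.
persona_instructions = """
You are "Tech-Savvy Tina", a middle manager under pressure to cut costs.
Traits: analytical, risk-averse, skims content, allergic to fluffy intros.
Goals: reduce manual data entry; justify tool spend to her director.
Pain points: vendor jargon, unclear pricing, content that buries the solution.
When reviewing a draft, say what is missing, what feels irrelevant,
and whether the call to action would convince you to act.
"""

draft = "Our revolutionary platform leverages synergies to empower your workflow..."

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",                      # any chat-capable model works here
    messages=[
        {"role": "system", "content": persona_instructions},
        {"role": "user", "content": f"Review this landing page draft:\n\n{draft}"},
    ],
)
print(response.choices[0].message.content)
```

Pasting the same brief into the custom GPT's "Instructions" field gives you the interactive sounding board; the API version simply lets you run many drafts through the persona at once.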

15 Fixes To Improve Low Conversion Rates In Google Ads via @sejournal, @brookeosmundson

Introduction

Running a Google Ads campaign can often feel like a high-stakes balancing act. On one hand, you are bidding against competitors for prime digital real estate; on the other, you are trying to convince a skeptical audience to click through and complete a specific action. For many digital marketers and business owners, the frustration begins when the clicks start rolling in, but the sales or leads do not. High traffic with a low conversion rate is a recipe for a depleted budget and a negative return on investment (ROI).

A low conversion rate is rarely the result of a single error. Instead, it is usually a combination of technical mismatches, poor user experience, and a misalignment between what the user expects and what you are offering. To turn the tide, you must look beyond the surface-level metrics and perform a deep dive into your account structure, your creative assets, and your post-click environment. Below are 15 comprehensive fixes designed to diagnose and repair low conversion rates in Google Ads, ensuring that every dollar spent is an investment toward growth rather than a sunk cost.

1. Audit and Verify Conversion Tracking Accuracy

Before making any structural changes to your campaigns, you must ensure your data is accurate. It is impossible to optimize for conversions if your tracking is broken, doubled, or missing entirely. Many accounts suffer from “phantom conversions” (where a page refresh triggers a conversion) or missed conversions (where the tag fails to fire on mobile devices).

Start by auditing your Google Tag Manager (GTM) setup. Ensure that your conversion linker tag is active and that your Google Ads conversion tracking tags fire only on the intended success pages, such as a “Thank You” or order confirmation screen. With the industry move toward Google Analytics 4 (GA4), verify that your web streams are correctly linked to your Google Ads account. Use Tag Assistant to simulate a conversion and confirm that the data reaches your dashboard in real time. Without clean data, your bidding strategies—especially automated ones—will fail.

2. Align Keyword Intent with the Sales Funnel

A common mistake in Google Ads is targeting keywords that are too broad or purely informational. If someone searches for “what is cloud computing,” they are likely in the research phase and are unlikely to convert immediately. Conversely, someone searching for “enterprise cloud computing pricing” is much further down the funnel.

Review your keyword list and categorize each term by intent: Informational, Navigational, or Transactional. To improve conversion rates, shift your budget toward transactional keywords. These are terms where the user’s intent to buy is clear. While these keywords often have a higher Cost Per Click (CPC), their higher conversion rate typically leads to a lower Cost Per Acquisition (CPA).
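One way to take a quick first pass at that categorization is a small script with modifier-based rules. The sketch below is illustrative only: the modifier lists are assumptions to tune per account, and the output still needs a human review before any budget moves.

```python
# Rough first-pass intent bucketing; modifier lists are illustrative and should be tuned per account.
TRANSACTIONAL = {"pricing", "price", "buy", "demo", "quote", "discount", "deal"}
INFORMATIONAL = {"what", "how", "why", "guide", "tutorial", "vs"}

def classify_intent(keyword: str) -> str:
    """Assign a coarse intent label so budget can be weighted toward transactional terms."""
    words = set(keyword.lower().split())
    if words & TRANSACTIONAL:
        return "Transactional"
    if words & INFORMATIONAL:
        return "Informational"
    return "Navigational / review manually"

keywords = ["what is cloud computing", "enterprise cloud computing pricing", "acme cloud login"]
for kw in keywords:
    print(f"{kw!r:45} -> {classify_intent(kw)}")
```

Treat the output as a starting point for the manual review, not a replacement for it.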
3. Optimize the Post-Click Landing Page Experience

Google Ads can only get a user to your website; the landing page is responsible for closing the deal. If there is a “scent” mismatch—meaning the landing page doesn’t look or feel like the ad that preceded it—the user will bounce immediately.

Ensure your landing page is highly relevant to the specific keyword group. If your ad promises “50% off Gaming Keyboards,” the landing page should immediately display those keyboards and that discount. Furthermore, optimize for speed. A one-second delay in mobile load times can decrease conversion rates by up to 20%. Use tools like Google PageSpeed Insights to identify bottlenecks and ensure your site is lean and responsive.

4. Implement a Robust Negative Keyword List

Negative keywords are your primary defense against wasted spend. If you are selling high-end luxury watches, you don’t want your ads appearing for searches like “free watches,” “cheap watches,” or “how to repair a watch.”

Regularly mine your Search Terms Report to find queries that triggered your ads but didn’t result in conversions. Add these as negative keywords at the campaign or account level. By filtering out irrelevant traffic, you ensure that your budget is preserved for users who are actually looking for your specific product or service.

5. Craft Compelling, Benefit-Driven Ad Copy

Your ad copy needs to do more than just describe what you sell; it needs to solve a problem or fulfill a desire. Many low-converting ads focus too much on features and not enough on benefits. Instead of saying “We have 24/7 support,” try “Get instant help whenever you need it.”

Use emotional triggers and clear calls to action (CTAs). Words like “Get,” “Save,” “Build,” and “Join” provide a clear instruction to the user. Additionally, utilize Responsive Search Ads (RSAs) to their full potential by providing all 15 headlines and 4 descriptions, allowing Google’s AI to test which combinations drive the most conversions.

6. Maximize Use of Ad Assets (Extensions)

Ad assets (formerly known as extensions) increase your ad’s real estate on the Search Engine Results Page (SERP) and provide more reasons for users to click. They also improve your Quality Score, which can lower your CPC.

At a minimum, you should use Sitelink Assets to point to specific pages, Callout Assets to highlight unique selling points (like “Free Shipping”), and Structured Snippet Assets to show a variety of products. If you are a local business, Location Assets are non-negotiable. More information upfront means that the people who eventually click are better informed, making them more likely to convert once they arrive.

7. Re-evaluate Your Bidding Strategy

If your conversion rate is low, you might be using the wrong bidding strategy for your goals. If you have enough historical data (usually 30+ conversions in the last 30 days), switching to “Maximize Conversions” or “Target CPA” (tCPA) can allow Google’s machine learning to find users most likely to convert.

However, if your account is new, automated bidding can sometimes work against you because the algorithm hasn’t learned your audience yet. In these cases, using Enhanced CPC (eCPC) or Manual CPC allows you to maintain tighter control over your spend until you have enough data for the AI to take over effectively.

8. Segment Performance by Device
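Segmenting by device generally means comparing conversion rates across desktop, mobile, and tablet, then applying device bid adjustments where one device lags. As a rough illustration of that comparison, here is a minimal sketch that aggregates a device-segmented report export; the file name and column names are assumptions, not the exact Google Ads export headers.

```python
import csv
from collections import defaultdict

# Hypothetical export of a campaign report segmented by device;
# the "device", "clicks", and "conversions" column names are assumptions.
REPORT_CSV = "campaign_device_report.csv"

totals = defaultdict(lambda: {"clicks": 0, "conversions": 0.0})

with open(REPORT_CSV, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        totals[row["device"]]["clicks"] += int(row["clicks"])
        totals[row["device"]]["conversions"] += float(row["conversions"])

for device, t in totals.items():
    cvr = t["conversions"] / t["clicks"] * 100 if t["clicks"] else 0.0
    print(f"{device:10} {t['clicks']:>8} clicks  {cvr:5.2f}% conversion rate")
```

If mobile converts at a fraction of the desktop rate, the landing page experience from fix 3 is usually the first place to look before adjusting bids.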
