How schema markup fits into AI search — without the hype

The Evolution of Search: From Keywords to Entities

For over two decades, search engine optimization was largely a game of keywords, backlink profiles, and technical site performance. However, the rise of Large Language Models (LLMs) and generative AI has fundamentally altered the landscape. We are moving away from a world of “blue links” and toward a world of “entities.” Search is shifting from surfacing a SERP (Search Engine Results Page) with simple links to AI Overviews, generative answers, and chat-style summaries. These systems do more than just find a page that contains a keyword; they collate content, summarize information, and provide direct answers.

To get your content to appear in this new model, your site must be understood as a collection of entities—singular, unique things or concepts, such as a person, place, or event—and the specific relationships between them. Schema markup, or structured data, is one of the few tools SEO professionals have to make those entities and relationships explicit. It serves as a bridge between the messy, unstructured prose of a human-readable webpage and the rigid, data-driven needs of an AI system.

But does schema markup really benefit AI search optimization? Some claim it can triple your citations or dramatically boost visibility. In reality, the evidence is more nuanced. Let’s separate what is known from what is assumed and look at how schema actually fits into a modern AI search strategy.

How Schema Fits Into AI Search Now

In the era of generative AI, systems like Google’s Gemini and Microsoft’s Copilot do not just “read” your website like a human would. They process data to build a knowledge graph. For an AI to accurately represent your brand or answer a query using your data, three elements matter the most:

1. Entity Definition

An AI needs to know exactly what is on a page. Is the page about a specific product, a professional service, a person, or a news event? Schema allows you to define these entities clearly. By using specific types like Product, Service, or Organization, you remove the guesswork for the LLM. It no longer has to infer the subject matter; you have explicitly declared it.

2. Attribute Clarity

Once the entity is identified, the AI needs to know its properties. For a product, this includes the price, currency, availability, and user ratings. For an author, it includes their job title and area of expertise. Schema markup provides a standardized format for these attributes, so that when an AI Overview extracts a price or a rating, it pulls the exact declared value rather than inferring one from prose.

3. Entity Relationships

This is perhaps the most critical component for AI search. Entities do not exist in a vacuum. A product is offeredBy an organization; an article is authoredBy a person; a person worksFor a company. Using schema properties like sameAs also helps connect your site’s entities to established external sources like Wikipedia, LinkedIn, or official databases. This builds a web of trust and context that AI systems can follow.

When schema is implemented with stable identifiers (@id) and a logical structure (@graph), it starts to behave like a small internal knowledge graph, as the sketch below illustrates. AI systems won’t have to guess who you are or how your content fits together. Instead, they can follow explicit connections between your brand, your authors, and your topics.
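To make that concrete, here is a minimal JSON-LD sketch of an @graph that links an organization, an author, and an article through stable @id references. Every name and URL below is a placeholder, not a prescribed template:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      "name": "Example Co",
      "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Co",
        "https://www.linkedin.com/company/example-co"
      ]
    },
    {
      "@type": "Person",
      "@id": "https://example.com/#jane-doe",
      "name": "Jane Doe",
      "jobTitle": "Head of Research",
      "worksFor": { "@id": "https://example.com/#org" }
    },
    {
      "@type": "Article",
      "@id": "https://example.com/schema-guide/#article",
      "headline": "Schema Markup in AI Search",
      "author": { "@id": "https://example.com/#jane-doe" },
      "publisher": { "@id": "https://example.com/#org" }
    }
  ]
}
```

Because author and publisher point at @id values rather than repeating names, a parser can resolve who wrote what, for whom, without any inference.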
How AI Search Platforms Use Schema

While the broader SEO community often speculates on how AI uses data, we have concrete confirmation from the two biggest players in the space. For these platforms, schema is confirmed infrastructure, not a theoretical advantage.

Google AI Overviews

In April 2025, the Google Search team explicitly stated that structured data remains essential in the AI search era. They confirmed that structured data gives an advantage in how content is interpreted and surfaced within AI Overviews. Because Google has spent years building its Knowledge Graph, it relies heavily on schema to verify the facts it presents in its generative summaries.

Microsoft Bing Copilot

Microsoft has been equally transparent. Fabrice Canel, a principal product manager at Microsoft Bing, confirmed in March 2025 that schema markup directly helps Microsoft’s LLMs understand content for Copilot. By providing structured data, you are essentially “pre-processing” your content for Bing’s AI, making it easier for the model to cite you as a source of truth.

The “Black Box” of ChatGPT and Perplexity

The situation is different for platforms like ChatGPT and Perplexity. While these tools are rapidly becoming search engines in their own right, they haven’t publicly confirmed exactly how they use schema. We don’t yet know if they preserve schema during their web crawling process or if they use it for data extraction. While LLMs are technically capable of reading JSON-LD (the format used for schema), it remains unclear if their search indices prioritize it. For now, optimizing for these platforms requires a focus on clear, authoritative prose, with schema serving as a secondary supporting layer.

Analyzing Research on Schema and AI

To understand the true impact of schema, we have to look at the data. Recent studies provide a reality check against the hype, showing that while schema is powerful, it is not a “magic button” for rankings.

The Citation Gap

A study conducted in December 2024 by Search/Atlas looked at the correlation between schema markup and citation rates in AI search results. Surprisingly, the study found no direct correlation. Sites with comprehensive, “perfect” schema did not consistently outperform sites with minimal or no schema. This finding is vital for SEOs to understand: schema alone does not drive citations. LLM systems prioritize relevance, topical authority, and semantic clarity above all else. If your content is poorly written or irrelevant to the query, great schema won’t save it. Schema is an amplifier, not a replacement for quality.

The Extraction Accuracy Advantage

While schema might not guarantee a citation, it significantly improves the accuracy of the information extracted. A February 2024 study published in Nature Communications found that LLMs perform significantly better when given structured prompts with defined fields compared to unstructured instructions.
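As a loose illustration of that contrast (neither prompt is taken from the study; both are invented), compare an open-ended instruction with one that defines the fields it wants back:

```python
# Hypothetical illustration of unstructured vs. structured extraction prompts.
unstructured_prompt = "Tell me everything important about the product on this page."

structured_prompt = """Extract exactly these fields from the page, one per line:
product_name:
price:
price_currency:
availability:
aggregate_rating:"""
```

The second prompt leaves far less room for the model to paraphrase or guess, which is the same job schema markup does for a page's attributes.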


TikTok ad creative has a shorter shelf life. Here’s how to keep up

Every digital marketer has experienced the specific sting of a TikTok campaign that starts with a bang and ends with a whimper. You launch a new ad set, and for the first 48 hours, the metrics are a dream. Your cost-per-click (CPC) is bottoming out, the click-through rate (CTR) is climbing, and your return on ad spend (ROAS) makes you look like a genius in the weekly marketing meeting.

Then, almost as if someone flipped a switch, the performance collapses. Frequency starts to creep up, meaning the same users are seeing your ad repeatedly. Your hook rate—the percentage of people who watch the first few seconds—plummets. Suddenly, you are back at square one, wondering where the magic went.

In traditional digital advertising, we call this creative fatigue. On TikTok, however, it is something more aggressive: creative exhaustion. The “half-life” of a TikTok ad is shorter than on any other major advertising platform. If you attempt to run your TikTok strategy using the same playbooks you use for Meta, Google, or Pinterest, you will inevitably lose money. To win on this platform, you have to stop treating creative as a “campaign asset” and start treating it as a “supply chain.”

Why TikTok creative decays so quickly

To understand why ads die so fast on TikTok, we have to look at the psychology of the platform. On intent-based platforms like Google or Amazon, users are actively searching for solutions. On social platforms like Facebook or Instagram, users are primarily there to connect with family and friends. TikTok is different. Above all else, TikTok is an entertainment platform.

The TikTok algorithm is built on a “content graph” rather than a “social graph.” This means the platform doesn’t prioritize who you follow; it prioritizes what you enjoy. This creates a high-velocity environment where novelty is the primary currency. Because the “For You Page” (FYP) is designed to constantly introduce users to new creators and concepts, the moment a piece of content feels repetitive or “stale,” the user swipes away instantly. Your creative decays faster because you aren’t just competing with other brands; you are competing with millions of creators who are publishing fresh, high-quality entertainment every second.

If your production process relies on long feedback loops—weeks spent on storyboarding, professional shoots, and multiple rounds of executive approval—you have already lost. By the time your “perfect” ad goes live, the trend has shifted, the audio is no longer trending, and your audience has moved on to the next big thing.

Shifting to a creative supply chain model

The secret to sustained success on TikTok is high-volume testing and rapid iteration. You cannot rely on one “hero” video to carry your brand for a quarter. Instead, you need a system that functions like a fast-moving supply chain. This involves three distinct stages:

1. Raw Materials

This is your library of unpolished footage. It includes B-roll of your product in use, unboxing videos, customer testimonials recorded on a smartphone, and natural, unscripted reactions from your team. These “raw materials” should be collected constantly, not just during scheduled shoots. The goal is to have a massive database of visual assets that can be pulled into an edit at a moment’s notice.

2. Processing

Processing is the rapid assembly of those raw materials into finished ads. Instead of creating one long video, you create modules. You combine a new trending hook with an existing body of value and a tested call to action (CTA). This allows you to produce dozens of variations from the same set of raw footage, as the quick sketch below shows.
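A back-of-the-napkin way to see the leverage of modular assembly (the module names here are invented placeholders):

```python
from itertools import product

# Hypothetical module library: 3 hooks, 2 bodies, 2 CTAs = 7 assets in total.
hooks = ["pattern-interrupt unboxing", "green-screen reaction", "negative-constraint opener"]
bodies = ["us-vs-them split screen", "first-person kitchen-counter demo"]
ctas = ["scarcity: last drop sold out in 48 hours", "low-friction: 2-minute fit quiz"]

# Every hook/body/CTA combination is a distinct testable ad.
variations = [{"hook": h, "body": b, "cta": c} for h, b, c in product(hooks, bodies, ctas)]
print(len(variations))  # 12 distinct ads from just 7 modules
```

Seven modules yield a dozen testable ads, and adding a single extra hook yields four more, which is why hook volume is usually where teams invest first.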
3. Distribution

This is the high-volume testing phase. You deploy your modular variations to see which ones the algorithm picks up. TikTok’s algorithm is incredibly efficient at finding an audience for a specific piece of content; your job is to give it enough options to find the “winner.”

The power of modular creative

One of the biggest bottlenecks in TikTok advertising is the belief that every ad needs to be a unique, standalone production. This is a recipe for burnout and budget waste. Instead, embrace the concept of modular creative. By breaking your ads down into three distinct components, you can exponentially increase your output.

The Hook (0:00–0:03)

The hook is the most volatile and critical part of your ad. It is responsible for stopping the scroll. Because the hook is what users see first, it fatigues faster than any other part of the video. To combat this, you should film five to seven variations of a hook for every single ad concept. Effective hooks often use “pattern interrupts”—visual or auditory triggers that break the user’s mindless swiping. This could be someone throwing a box toward the camera, starting a sentence mid-action, or using a “green screen” effect to react to a controversial headline or a glowing customer review. Try using negative constraints, such as: “Stop doing [common mistake] if you want to see [specific result].”

The Body (0:04–0:15)

If the hook stops the scroll, the body retains the attention. This is where you deliver the value proposition, show the product in action, or tell a brief story. The body of the ad tends to have a longer shelf life than the hook because users only see it if they’ve already committed to the video. In this section, focus on “Us vs. Them” split-screens or first-person demonstrations. Show the product being used in real-life settings—at a messy kitchen counter, in a crowded gym, or at a work desk. The more “native” and less “produced” the body feels, the more likely a user is to trust the message.

The Call to Action (The last 3–5 seconds)

The CTA is where you close the deal. While “Shop Now” is the standard, TikTok users often respond better to psychological triggers and low-friction entries. You might test scarcity (“Our last drop sold out in 48 hours”) or a low-commitment offer (“Take our 2-minute quiz to find your perfect fit”). When


The first-party data illusion by AtData

The Shift Toward a First-Party Future

For the better part of a decade, the digital marketing landscape has been undergoing a seismic transformation. Driven by tightening privacy regulations like GDPR and CCPA, as well as the long-anticipated (and often delayed) deprecation of third-party cookies, organizations have been forced to rethink how they identify and engage with their audiences. The industry-wide consensus emerged quickly: first-party data was the promised land.

The logic seemed foolproof. By collecting data directly from customers through owned channels—websites, mobile apps, and point-of-sale systems—brands could build more durable, transparent, and compliant relationships. Marketing leaders were told to collect as much as possible, centralize it in massive data warehouses or Customer Data Platforms (CDPs), and build their entire business strategy around this proprietary goldmine.

This shift was, in many ways, a positive evolution. It prioritized consent, reduced reliance on “rented” audiences from tech giants, and forced brands to think more deeply about the value exchange they offered their users. Organizations that invested early in these internal data ecosystems found themselves better protected against the volatility of the ad-tech market. However, as the dust settles on this transition, a disturbing trend is emerging. Many organizations are discovering that owning a massive database of customer records does not necessarily mean they actually understand who their customers are today.

Defining the First-Party Data Illusion

The “first-party data illusion” is the false sense of security that comes from having a large database of customer information. It is the belief that because data is “ours,” it is inherently accurate, actionable, and representative of the current consumer. In reality, first-party data is often a collection of frozen moments in time—historical artifacts that may no longer correspond to the living, breathing human on the other side of the screen.

Most marketing stacks are built on the assumption that once a piece of data is verified and stored, it remains a “truth” until it is explicitly updated. But the digital world does not stand still. Consumers are constantly rotating devices, updating their privacy settings, and changing their habits. The record in your CRM might say “active customer,” but the reality might be an abandoned email inbox or a user who has shifted their primary digital identity to an entirely different ecosystem.

When marketing leaders rely on this illusion, they make decisions based on a distorted map. This leads to campaigns that reach fewer people than expected, personalization efforts that miss the mark, and measurement models that look precise on a dashboard but fail to drive real-world revenue.

The Rapid Decay of Information: When Data Becomes History

One of the most overlooked characteristics of customer data is its shelf life. Data is not a permanent asset; it is a perishable one. The moment a customer provides their information—whether through a newsletter sign-up, a whitepaper download, or a product purchase—that data is at its peak accuracy. From that point forward, its value begins to erode.

In the industry, we often talk about “data decay.” Statistically, B2B data decays at a rate of nearly 30% per year as people change jobs and companies.
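To put that rate in perspective, a quick compounding sketch (the record count is arbitrary):

```python
# Illustrative only: compounding the ~30% annual B2B decay rate cited above.
annual_decay = 0.30
initial_records = 100_000

for year in (1, 2, 3):
    still_accurate = initial_records * (1 - annual_decay) ** year
    print(f"Year {year}: ~{still_accurate:,.0f} records still accurate")
# Year 1: ~70,000 | Year 2: ~49,000 | Year 3: ~34,300
```

In under three years, a majority of an untouched B2B file can go stale, and nothing inside the CRM flags it.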
In the B2C world, the decay is more subtle but equally damaging. Consumers frequently create “burner” email addresses for one-time discounts. They graduate from university and lose access to student accounts. They move to different cities, change their surnames, or simply evolve from being a “Gmail person” to an “Apple Mail person.”

The result is that your first-party database is constantly shifting from the present tense to the past tense. The record still exists, the “ID” is still in your system, and your automated workflows are still firing. But the certainty surrounding that identity is loosening. Without a mechanism to refresh and validate this data, companies end up marketing to a graveyard of digital identities.

The Distance Between Records and Reality

Modern marketing infrastructure is designed around the concept of the “Unified Customer Profile.” CDPs and identity graphs are sold on the promise of stitching together fragmented signals—a website click here, an app login there, a support ticket from last month—into a single, coherent view of the customer. When these systems work, they are incredibly powerful. They allow for the kind of seamless, omnichannel experiences that consumers have come to expect.

However, the integrity of these systems is entirely dependent on the quality of the “anchors” that connect them. Usually, these anchors are identifiers like an email address, a phone number, or a hashed login credential. The challenge arises when those anchors drift. If an identity graph is trying to reconcile signals using an email address that the consumer only checks once every three weeks, the “unified” profile becomes a fragmented mess. The system might technically perform its job—connecting the data it sees—but it lacks the visibility to know that the consumer has moved on.

Marketing leaders often sense this gap when their analytics show high “match rates” but low conversion rates. The database reflects what was known at the time of collection; the customer reflects what is happening right now. Bridging this gap requires moving beyond static attributes and looking for more dynamic indicators of life.

The Vital Importance of Activity Signals

If static records are the problem, “activity signals” are the solution. Forward-thinking organizations are beginning to realize that the most important question they can ask about a customer is not “What is their name?” or “What did they buy two years ago?” but rather, “Is this identity still active in the digital ecosystem?”

Activity signals provide a real-time pulse check on a customer record. Instead of relying solely on the data stored in a private silo, these signals look at the broader behavior of an identifier across the open web. Key questions answered by activity signals include:

1. Is this email address currently being used for authentications or transactions elsewhere?
2. Does this identity appear in recent digital interactions across a wide network of providers?
3. Are the behavioral patterns associated with this ID consistent with a real human being, or do


Google AI Mode Goes Personal, Crawl Limits Clarified – SEO Pulse via @sejournal, @MattGSouthern

The Evolution of Search: Google’s Shift Toward Personal Intelligence

The digital landscape is undergoing a foundational shift as Google transitions from a traditional search engine into a comprehensive personal AI assistant. In recent updates, Google has expanded its “Personal Intelligence” features to free users, moving these advanced capabilities out of the exclusive domain of Gemini Advanced subscribers. This move marks a significant milestone in how everyday users interact with the web and their own data.

Personal Intelligence in the context of Google Gemini refers to the AI’s ability to access and synthesize information from a user’s personal ecosystem, including Google Drive, Gmail, and Google Docs. By opening these extensions to free users, Google is democratizing access to agent-like behavior. Users can now ask the AI to find specific details in an old email, summarize a long document stored in Drive, or even cross-reference travel itineraries without manually digging through their inbox.

For SEO professionals and digital marketers, this shift suggests a move toward a more “walled garden” approach to information retrieval. When an AI provides an answer based on a user’s private data, the need for an external web search diminishes for that specific query. This highlights the growing importance of being integrated into the user’s workflow rather than just being a destination on a results page.

Understanding Google Gemini Extensions for Free Users

The rollout of extensions to free users allows Gemini to interact with various Google apps. These include:

Google Workspace Integration

This is perhaps the most impactful update. Users can prompt Gemini to “Find my lease agreement in Drive and tell me when the notice period begins” or “Summarize the last three emails from my project manager.” This level of utility encourages users to stay within the Gemini interface for longer periods, potentially shifting the starting point of their digital journey away from the standard search bar.

Google Maps and Flights

By integrating real-time data from Maps and Flights, Gemini can assist in planning trips that are personalized to the user’s location and preferences. For travel bloggers and local businesses, this means that visibility within Google’s core ecosystem is more critical than ever, as the AI draws on this structured data to formulate its personal recommendations.

YouTube and Media

The YouTube extension allows Gemini to scan video content to answer specific questions. This reinforces the need for creators to use clear titles, descriptions, and transcripts, as the AI uses these elements to understand and recommend content within a conversational interface.

Technical SEO Deep Dive: Gary Illyes on Crawl Limits

While the front end of Google is becoming more AI-driven, the back end still relies on the fundamental process of crawling and indexing. Gary Illyes, a prominent analyst on Google’s Search Relations team, recently provided much-needed clarification on the concept of “crawl limits” versus “crawl budget.” For years, the SEO community has debated the nuances of crawl budget, often fearing that Googlebot might “run out” of time to index their pages. Illyes clarified that for the vast majority of websites, crawl budget is not a primary concern. Instead, the focus should be on “crawl capacity” and “crawl demand.”

Crawl Capacity: The Server’s Threshold

Crawl capacity is essentially a limit designed to protect a website’s server. Googlebot is programmed to be a “polite” crawler. If Google perceives that your server is slowing down or returning error messages under the pressure of too many requests, it will automatically reduce its crawl rate. This is a protective measure to ensure that the bot does not crash the site for actual human visitors.
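The shape of that logic can be sketched in a few lines. This is an illustrative back-off heuristic in the spirit of a “polite” crawler, not Googlebot’s actual algorithm:

```python
def next_crawl_delay(current_delay: float, status_code: int, response_ms: float) -> float:
    """Toy model of a 'polite' crawler adjusting its request rate.

    Slow or erroring responses make the crawler back off sharply;
    healthy responses let it speed back up gradually.
    """
    if status_code >= 500 or response_ms > 2000:  # server appears to be struggling
        return min(current_delay * 2.0, 60.0)     # double the wait, capped at 60s
    return max(current_delay * 0.9, 0.5)          # recover slowly, floored at 0.5s
```

The practical corollary for site owners is the same one Illyes draws: a fast, stable server raises the ceiling on how much a crawler is willing to fetch.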
Crawl Demand: Is Your Content Worth It?

Crawl demand refers to how much Google actually *wants* to crawl your site. This is driven by two main factors: popularity and freshness. If a page is frequently updated or receives significant traffic and backlinks, Google’s demand to crawl that page increases. If a site has thousands of low-quality, stagnant pages, the demand will drop, regardless of the server’s capacity.

The Practical Takeaway for Webmasters

The clarification from Illyes underscores a vital point: technical SEO is not about “tricking” the bot into crawling more. It is about maintaining a high-performance server and ensuring that content is high-quality. If a site experiences indexing issues, the problem is more likely to be a slow server or a lack of content value rather than an arbitrary “limit” imposed by Google.

The Impact of AI Overviews (AIO) on Search Traffic Trends

One of the most talked-about changes in the SEO industry is the introduction of AI Overviews (formerly SGE). These AI-generated summaries appear at the top of the Search Engine Results Pages (SERPs), providing direct answers to user queries. New data is beginning to emerge regarding how these overviews affect organic traffic and click-through rates (CTR).

Data Insights: Who is Losing Traffic?

Early studies suggest that informational queries—those seeking quick facts, definitions, or simple explanations—are the most affected by AI Overviews. When the AI provides a comprehensive answer directly in the SERP, the “zero-click” search phenomenon increases. Websites that rely heavily on top-of-funnel informational content may see a decline in organic traffic.

The Opportunity in Complexity

Conversely, queries that require deep expertise, nuanced opinions, or transactional intent seem to be more resilient. AI Overviews often struggle with highly technical topics or subjective “best of” lists where personal experience (the extra “E” in E-E-A-T) is paramount. Furthermore, AI Overviews frequently include links to the sources used to generate the summary. This presents a new opportunity: appearing as a cited source within an AIO can lead to high-quality, high-intent traffic, even if the total volume of impressions on the traditional blue links decreases.

Adapting Content Strategy for AIO

To stay relevant in the age of AIO, publishers must focus on:

– Providing unique data and primary research that AI cannot easily replicate.
– Structuring content with clear headings and concise summaries that are easy for AI to parse and cite.
– Focusing on long-tail, complex queries where users require more than a paragraph-long summary.

The Synergy of


The Content Moat Is Dead. The Context Moat Is What Survives via @sejournal, @DuaneForrester

The End of the Traditional Content Moat

For more than a decade, the recipe for digital success was relatively straightforward: create more content than your competitors, make it longer, and optimize it for specific keywords. This strategy created what marketers called a “content moat.” By sheer volume and topical coverage, a website could protect its rankings and authority, making it difficult for newcomers to break through. If you wrote the most comprehensive guide on a topic, you owned that topic.

However, the landscape of the internet has undergone a seismic shift. With the advent of Large Language Models (LLMs) and Generative AI, the cost of producing “good” content has effectively dropped to zero. What used to take a human writer ten hours to research and draft can now be produced by an AI in ten seconds. As a result, the traditional content moat has dried up. When everyone can produce high-quality, long-form guides at the push of a button, “well-written” is no longer a competitive advantage. It is merely the baseline.

According to insights from Duane Forrester and industry analysis via Search Engine Journal, we are entering an era where visibility in AI-driven search results depends on something far more elusive than information. It depends on context. The content moat is dead, and the context moat is the only thing that will survive the AI revolution.

Why AI Killed the Informational Guide

To understand why the content moat failed, we have to look at how search engines like Google and Bing are evolving. In the past, a search engine’s job was to point you toward a website that had the answer. Today, with Search Generative Experience (SGE) and AI Overviews, the search engine’s job is to provide the answer directly on the results page. If your website relies on providing “how-to” information, definitions, or generic summaries, you are now competing directly with the search engine itself.

AI is exceptionally good at synthesizing public information. If your content is just a collection of facts that can be found elsewhere on the web, an LLM can summarize it perfectly, leaving the user with no reason to click through to your site. This is the death of the informational content moat. When content is commoditized, its value evaporates. We are currently seeing a glut of “AI-optimized” articles that all say the same thing in slightly different ways. For brands and creators, this leads to a “race to the bottom” where traffic declines despite high production volumes. To escape this, publishers must shift their focus from what they are saying to why it matters in a specific, irreplaceable context.

Defining the Context Moat

What exactly is a context moat? While a content moat is built on information, a context moat is built on experience, unique data, and situational relevance. Context is the “connective tissue” that links a piece of information to a specific human outcome or a proprietary insight that an AI cannot replicate because it doesn’t “live” in the world. A context moat is formed when you provide value that an AI cannot simulate through training data alone. This includes:

1. First-Hand Experience and “Proof of Work”

AI can tell you how to fix a sink based on thousands of manuals it has read, but it cannot tell you how it felt when the pipe burst in your specific kitchen or the unique trick you used to solve a problem that wasn’t in the manual.
Google’s emphasis on “Experience” in its E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) guidelines is a direct response to the need for a context moat. Readers—and search engines—now value the “I did this” factor over the “This is how it’s done” factor.

2. Proprietary Data and Original Research

An LLM is a closed system based on historical data. It cannot predict the future, and it certainly doesn’t have access to your private company data, your customer surveys, or your internal experiments. By publishing original research and data-backed insights, you create a moat that AI cannot cross because it simply does not have the source material to work with.

3. Brand Voice and Counter-Intuitive Opinions

AI is designed to be agreeable and middle-of-the-road. It aggregates the “average” opinion. A context moat is built by taking a stand, offering a contrarian view, or injecting a unique brand personality that resonates with a specific audience. When a reader seeks out your content because they want *your* specific take on a news item, you have successfully built a context moat.

The Shift from Answers to Insights

As Duane Forrester notes, the future of SEO and digital publishing isn’t about being an answer engine; it’s about being an insight engine. AI is the ultimate answer engine. It can tell a user the “what” and the “when.” Human creators must focus on the “why” and the “so what.”

Consider a tech blog reviewing a new graphics card. An AI-generated article can list the specs, compare them to the previous generation, and summarize other reviews. That is a content moat. A context moat, however, would involve a reviewer testing that card in a specific, high-pressure environment—perhaps a 48-hour gaming marathon or a complex 3D rendering project—and explaining how the hardware’s heat output affected their specific workspace or how the drivers interacted with niche software. That lived experience provides context that a machine cannot synthesize.

How to Build Your Context Moat

Building a context moat requires a fundamental shift in how editorial teams operate. It moves away from keyword-first planning and toward insight-first planning. Here are the core strategies for building a moat that survives the AI era.

Integrate Subject Matter Experts (SMEs) Deeply

In the old model, a writer would research a topic and write an article. In the new model, the writer must interview a subject matter expert to extract “hidden” knowledge that isn’t available online. These nuances—the small details, the common pitfalls, the industry secrets—are the building blocks of context.


Google releases March 2026 spam update

Google Initiates the First Major Spam Update of 2026

Google has officially announced the release of the March 2026 spam update, marking a significant shift in the search landscape for the new year. The update began rolling out today at approximately 3:20 p.m. ET. As the first dedicated spam update of 2026, this move signals Google’s ongoing commitment to refining its automated detection systems and purging low-quality, manipulative content from its search results. This release follows closely on the heels of the February 2026 Discover core update, making it the second major announced algorithm change of the year.

For webmasters, SEO professionals, and site owners, the March 2026 spam update represents a critical period of volatility. While Google’s automated systems are constantly working in the background to identify and neutralize spam, these named updates usually involve significant improvements to the underlying technology, often targeting specific new trends in web manipulation.

Timeline and Scope of the March 2026 Spam Update

Google has indicated that the rollout of this update will be relatively swift compared to broad core updates, which can often take up to two weeks to fully propagate. According to official statements from Google’s Search Status Dashboard and its social communications on LinkedIn, the March 2026 spam update is expected to take “a few days” to complete its rollout.

The scope of this update is global. It affects all languages and all regions simultaneously. This means that whether you are managing a local gaming blog in the United States or a multilingual tech news portal in Europe or Asia, your rankings could be influenced by these changes. Google has characterized this as a “normal spam update,” but in the context of the rapidly evolving AI-generated content landscape of 2026, “normal” still implies a high level of sophistication in how the engine distinguishes between value-add content and search engine results page (SERP) clutter.

The Gap Between Updates: August 2025 to March 2026

It has been roughly seven months since Google’s last dedicated spam update, which concluded in August 2025. This seven-month window is noteworthy. Historically, Google tends to release spam updates when it has collected enough data on new spamming techniques to significantly retrain its AI-based detection systems, most notably SpamBrain. The transition from 2025 into 2026 has seen a massive surge in automated content creation and “parasite SEO” tactics. The length of time between the August 2025 update and the current March 2026 update suggests that Google has been refining its algorithms to better handle these increasingly complex methods of gaming the system. If your site has benefited from aggressive content scaling over the last half-year, this update may serve as a correction.

Understanding SpamBrain and AI-Based Detection

Central to these updates is SpamBrain, Google’s AI-based spam-prevention system. Launched years ago and continuously upgraded, SpamBrain does not just look for simple signals like keyword stuffing or hidden text. Instead, it utilizes machine learning to analyze patterns of behavior across millions of websites. SpamBrain is designed to identify:

Scalable Content Abuse: Sites that churn out thousands of pages of low-value content using automated tools or AI without sufficient human oversight or added value.
Site Reputation Abuse: Often referred to as “parasite SEO,” where high-authority sites host third-party content that has little to do with the main site’s topic, solely to leverage the host’s ranking power.

Expired Domain Abuse: The practice of purchasing expired domains with high authority and repurposing them to host low-quality content in hopes of a quick ranking boost.

The March 2026 update likely includes new training data for SpamBrain, allowing it to catch newer variations of these tactics that might have bypassed previous iterations of the algorithm.

Why the March 2026 Spam Update Matters for Tech and Gaming Sites

The tech and gaming niches are often at the forefront of SEO experimentation, making them particularly sensitive to spam updates. For tech blogs, content such as “best software” lists or “how-to” guides can sometimes fall into the trap of being overly templated or thin. In the gaming world, sites that aggregate patch notes, leaked information, or simple walkthroughs may find themselves under scrutiny if the content does not provide a unique perspective or original reporting.

Google’s goal is to reward content that demonstrates E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness). Spam updates specifically target the opposite: content that exists purely to rank rather than to help the user. For gaming news sites, this means that “thin” articles generated solely to capture trending search terms without providing actual substance may see a decline in visibility as the update rolls through.

Link Spam vs. Content Spam: What You Need to Know

While Google has not specified that the March 2026 update is focused solely on links, it is important to understand how Google handles link-related spam. In its documentation, Google makes a clear distinction between general spam and link spam. If this update includes improvements to link spam detection, the impact on a site’s rankings can be permanent in a way that is difficult to “fix.”

When Google’s systems identify spammy links—such as those from link farms, paid placements, or automated comment spam—the algorithm often chooses to simply ignore or “neutralize” those links. This means any ranking power those links were providing disappears. Unlike a manual action, where you can remove links and file a reconsideration request, an algorithmic neutralization of links cannot be undone by simply cleaning up your link profile. To regain those rankings, a site must earn new, legitimate, high-quality links to replace the lost “benefit” of the spammy ones. This is a crucial distinction for SEOs to remember: losing rankings in a link spam update isn’t always a “penalty”; it’s often just the removal of an unearned advantage.

What to Do If Your Traffic Drops During the Rollout

If you notice a sudden decline in your organic traffic or a drop in your keyword rankings between now and the end of the week, the March 2026 spam update is the most likely culprit. However, it is


Reddit introduces collection ads, deal overlays, Shopify integration

The Strategic Shift: Reddit’s Evolution into a Direct Response Powerhouse

For years, Reddit was viewed by digital marketers as the “final frontier” of social media advertising. While platforms like Meta, Instagram, and TikTok built robust, automated ecosystems for e-commerce, Reddit remained a sanctuary for discussion, community building, and—occasionally—brand skepticism. However, the tide has turned. Reddit has officially announced a suite of new Dynamic Product Ad (DPA) features designed to transform the platform from a research-heavy destination into a high-converting performance marketing channel.

With the introduction of Collection Ads, Community and Deal overlays, and a long-awaited Shopify integration, Reddit is signaling its intent to capture a larger share of the performance advertising market. These updates arrive at a critical time when privacy changes on other platforms have made high-intent audiences harder to find. On Reddit, intent is baked into the ecosystem, and these new tools are designed to harvest that intent more efficiently than ever before.

Understanding Reddit’s New Collection Ads: Bridging Discovery and Purchase

The centerpiece of this update is the rollout of Collection Ads. This new Dynamic Product Ad format is specifically engineered to solve the “intent gap” between browsing and buying. In the past, advertisers on Reddit often had to choose between lifestyle-oriented brand awareness ads or clinical, product-focused conversion ads. Collection Ads merge these two worlds.

The format pairs a large “lifestyle” hero image or video with a series of shoppable product tiles displayed in a carousel format below. This layout allows brands to tell a story while providing an immediate path to purchase. For example, a gaming hardware brand could feature a high-quality video of a professional streamer’s setup (the lifestyle hero) while simultaneously showcasing the specific keyboard, mouse, and headset used in the video (the shoppable tiles).

Early data suggests this hybrid approach is working. According to Reddit’s internal metrics, early adopters who follow best practices for Collection Ads are seeing an average 8% lift in Return on Ad Spend (ROAS). This suggests that Reddit users are becoming more comfortable with shoppable content, provided it is presented in a way that aligns with the visual language of their favorite communities.

The Power of Visual Context in Niche Communities

What makes Collection Ads on Reddit different from similar formats on Instagram or Pinterest is the context of the subreddit. If a user is browsing r/Running, they are already in a mindset focused on gear, training, and performance. A Collection Ad from a footwear brand doesn’t feel like an intrusion; it feels like a recommendation. By using a hero image that reflects the aesthetic of the community, brands can bypass the typical “ad blindness” that plagues more traditional formats.

Leveraging Social Proof: Community and Deal Overlays

One of the most unique aspects of Reddit’s advertising evolution is the introduction of native overlays. Unlike standard banner ads, these overlays leverage the platform’s greatest strength: its community-driven authority. Reddit is introducing “Community” and “Deal” overlays that sit directly on top of product images, providing instant social proof.

The “Redditors’ Top Pick” Label

The “Redditors’ Top Pick” label is a game-changer for performance marketers. Reddit users famously value the opinions of their peers over the claims of a brand.
In fact, 84% of shoppers say they feel more confident in their purchases after researching products on Reddit. By surfacing these native labels automatically, Reddit allows brands to capitalize on existing community sentiment. This label acts as a digital seal of approval, reducing the friction of the “Is this product actually good?” question that many consumers ask before checking out.

Deal Overlays and Pricing Signals

In addition to social proof, Reddit is simplifying the way brands communicate value through Deal overlays. These automatic discount callouts surface pricing signals directly on the ad unit without requiring the advertiser to manually update creative assets for every promotion. In an era of high inflation and price sensitivity, having a “15% Off” or “Limited Time Deal” badge clearly visible can significantly increase click-through rates (CTR) and conversion volume.

The Shopify Integration: Streamlining the Path to Performance

Perhaps the most significant technical update in this announcement is the new Shopify integration, currently in its alpha phase. Historically, one of the biggest barriers to entry for e-commerce brands on Reddit was the complexity of the technical setup. Setting up a product catalog and ensuring the Reddit Pixel was firing correctly across a complex store required developer resources that many small-to-medium businesses (SMBs) simply didn’t have.

The Shopify integration simplifies this entire process. It allows merchants to sync their product catalogs directly with Reddit, automatically matching products to the right users and contexts. This integration handles the heavy lifting of:

Automated Catalog Syncing: Ensuring that out-of-stock items aren’t advertised and that pricing is always accurate.

Pixel Optimization: Simplifying the tracking of the customer journey from the first click to the final purchase.

Smart Targeting: Utilizing Reddit’s internal algorithms to place products in front of users who have expressed interest in similar categories.

By lowering the barrier to entry, Reddit is positioning itself as a viable alternative to the Google-Meta duopoly for Shopify merchants who are looking to diversify their traffic sources.

By the Numbers: Why Performance Marketers Are Moving to Reddit

The data behind Reddit’s advertising growth is compelling. The platform reported that its Dynamic Product Ads delivered an average 91% higher ROAS year-over-year in Q4 2025. This surge in performance is attributed to improved machine learning models and a more mature ad auction environment.

Case Study: Liquid I.V.

A standout success story in the Reddit DPA ecosystem is the hydration brand Liquid I.V. The company reports that Dynamic Product Ads already account for a staggering 33% of its total platform revenue on Reddit. Furthermore, these DPA campaigns are outperforming Liquid I.V.’s other standard conversion campaigns by 40%. This highlights that for brands with a broad appeal and a clear product-market fit, Reddit’s automated ad products are no longer just an “experiment”—they are a core revenue driver.

The Cultural Shift: Shopping as a Conversation

Why is this


AI citations favor listicles, articles, product pages: Study

The landscape of search engine optimization is undergoing a seismic shift. As generative AI becomes integrated into the way users find information, the traditional “ten blue links” are being supplemented—and in some cases, replaced—by AI-generated summaries. For digital marketers, publishers, and SEO professionals, the burning question has been: what kind of content does an AI choose to cite?

A comprehensive new study from the Wix Studio AI Search Lab has provided the most data-driven answer to date. By analyzing over 75,000 AI-generated answers and more than one million citations across three major platforms—ChatGPT, Google AI Mode, and Perplexity—researchers have identified a clear hierarchy in the types of content that AI models prefer. The findings suggest that AI citations are not distributed randomly; instead, they heavily favor three specific formats: listicles, long-form articles, and product pages.

This research marks a pivotal moment for content strategy. Understanding these preferences allows creators to move beyond guesswork and start practicing “Generative Engine Optimization” (GEO) with precision. Here is a deep dive into the findings and what they mean for the future of digital publishing.

The Power Trio: Listicles, Articles, and Product Pages

According to the Wix Studio research, over half of all AI citations (52%) come from just three content formats. This concentration indicates that LLMs (Large Language Models) have developed a “preference” for structured, informative, and transactional content that mirrors how humans consume information online.

Listicles emerged as the most cited format, capturing 21.9% of all citations. This is likely due to their inherent structure. Listicles provide clear headings, bullet points, and concise summaries, making it incredibly easy for an AI to parse information and present it to a user who is looking for comparisons or quick takeaways. Standard articles followed closely at 16.7%. These are typically long-form, informational pieces that provide depth, context, and expert analysis. When an AI needs to explain “why” or “how” something works, it turns to these comprehensive resources. Finally, product pages accounted for 13.7% of citations, serving as the primary source for transactional queries where specific features, prices, or availability are required.

Why Listicles Dominate the AI Landscape

The dominance of listicles is particularly striking in the realm of commercial intent. The study found that listicles captured 40% of commercial-intent citations—nearly double the share of any other content type. When a user asks an AI for the “best project management software” or “top-rated gaming laptops,” the AI is significantly more likely to pull data from a list-style article than from a deep-dive essay or a single product review. From an algorithmic perspective, listicles provide a high density of entities (brands, products, or locations) in a format that is easy to categorize. For SEOs, this means that the “top 10” format is not just alive and well; it is the cornerstone of visibility in AI-driven search results.
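The analysis behind numbers like these is essentially a very large tally. A toy sketch of the approach, with invented rows standing in for the study’s real dataset:

```python
from collections import Counter

# Invented sample rows; the study's actual dataset has over one million citations.
citations = [
    {"format": "listicle", "intent": "commercial"},
    {"format": "article", "intent": "informational"},
    {"format": "article", "intent": "informational"},
    {"format": "product_page", "intent": "transactional"},
]

by_format = Counter(row["format"] for row in citations)
total = sum(by_format.values())
for fmt, count in by_format.most_common():
    print(f"{fmt}: {count / total:.1%} of citations")
```

Slicing the same tally by intent instead of format is what surfaces the patterns described next.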
Search Intent: The Primary Predictor of Citations

One of the most significant takeaways from the Wix Studio AI Search Lab study is that user intent—not the specific industry or even the AI model being used—is the strongest predictor of which content gets cited. AI models have become highly sophisticated at matching the “job to be done” by the user with the format best suited to deliver that information.

Informational Queries and Long-Form Authority

For informational queries, where users are looking to learn or understand a concept, articles are the undisputed king. The study found that articles are cited 2.7 times more often than other formats for informational searches, holding a 45.5% share of these citations. Listicles still play a role here, accounting for 21.7%, often when the information is better served as a series of steps or facts.

Commercial and Transactional Nuances

As mentioned, listicles take the lead for commercial queries (40.9%). However, when the user’s intent shifts toward making a purchase (transactional) or finding a specific brand (navigational), the AI pivots toward product and category pages. Combined, these two formats make up roughly 40% of citations for these intent types. This suggests that while a listicle gets you “in the door” during the consideration phase, your product page is what seals the deal in the AI’s final answer.

The Neutrality Bias: Third-Party vs. Self-Promotional Content

A critical finding for brands is the AI’s preference for neutral, third-party editorial content over self-promotional materials. This is most evident in the professional services sector. The study revealed that third-party listicles (such as reviews from tech blogs or independent analysts) accounted for 80.9% of citations. In contrast, self-promotional lists—content created by a brand to rank its own services—accounted for only 19.1%.

This indicates that LLMs are programmed or trained to prioritize perceived objectivity. If you are a SaaS company, an AI is far more likely to cite a “Top 10 CRM” list from an independent publication like Wired or The Verge than a list on your own blog where you claim to be number one. This reinforces the importance of digital PR and backlink strategies; getting mentioned in third-party “best of” lists is now a primary requirement for appearing in AI search results.

Model-Specific Differences: ChatGPT, Google, and Perplexity

While the overall trends remain consistent, the study highlighted fascinating differences in how the major AI players curate their citations. Depending on where your audience spends their time, your content strategy might need subtle adjustments.

ChatGPT: The Informational Educator

OpenAI’s ChatGPT shows a heavy lean toward articles and educational content. It prioritizes depth and narrative, making it the most “traditional” in its citation habits. If your goal is to be cited by ChatGPT, focus on high-authority, long-form content that answers complex questions thoroughly.

Google AI Mode: The Balanced All-Rounder

Google’s AI Mode (often associated with Gemini and Search Generative Experience) showed the most balanced distribution across all content formats. Given Google’s vast index of the web and its long history with shopping and local search, it is adept at pulling from listicles, articles, and product pages with equal efficiency. It reflects a more “middle-of-the-road” approach that values variety.


Google is tightening political content rules for Shopping ads starting April 16

A New Standard for Political Content in Digital Commerce

In the lead-up to several major global elections, Google is making a decisive move to enhance transparency and security within its advertising ecosystem. Starting April 16, the tech giant will implement significantly tighter restrictions on political content specifically within Google Shopping ads. While political advertising has long been a scrutinized area for Search and YouTube, this latest update signals a major expansion into the realm of e-commerce and retail media.

For years, Google Shopping has been a primary destination for consumers looking to purchase everything from electronics to apparel. However, as the line between retail products and political messaging blurs—think campaign t-shirts, hats, and printed materials—Google is moving to ensure that these items are held to the same rigorous standards as traditional campaign advertisements. This shift is not just a minor policy tweak; it is a fundamental change in how merchants must manage their product feeds and account verifications if they intend to sell items with political themes.

The Specifics: What Is Changing on April 16?

The core of this update involves a mandatory verification process for merchants whose Shopping ads contain what Google defines as “election-related content.” From the mid-April deadline, any merchant running ads that feature specific political content in nine targeted countries must be verified as an election advertiser. Failure to complete this process will lead to ad disapprovals and could potentially impact the standing of the Merchant Center account.

Historically, Shopping ads were often seen as a “softer” territory for political content because they primarily focus on physical goods. However, Google is now closing the loop, ensuring that any ad format that can be used to influence or represent a political candidate, party, or issue is subject to the same level of disclosure. This means that if you are selling a “Candidate 2024” sweatshirt, your account must now prove its legitimacy through the same channels used by official campaign committees.

Affected Jurisdictions: A Global Reach

Google’s policy update is not a global blanket rule in terms of implementation, but it targets nine key regions where political discourse and e-commerce frequently intersect. Merchants operating in or targeting the following countries must pay close attention to the new requirements:

Argentina
Australia
Chile
Israel
Mexico
New Zealand
South Africa
United Kingdom
United States

In these regions, the requirement is verification. However, the situation in India is notably different. In India, Google will outright prohibit certain political Shopping ads entirely. This move likely stems from specific local regulatory environments and the upcoming general elections in the country, where the spread of political merchandise via automated ad platforms has been a point of contention for regulators.

Why Google is Targeting Shopping Ads Now

The timing of this policy shift is no coincidence. 2024 is often described as a “super-election year,” with more than half of the world’s population heading to the polls across various nations. Digital platforms are under immense pressure from governments and the public to prevent misinformation, foreign interference, and “dark money” from influencing voters. By bringing Shopping ads into the fold of election integrity efforts, Google is acknowledging that commerce is a form of expression.
A promoted product listing for a political book, a piece of memorabilia, or even a satirical sticker pack can reach millions of users. Without verification, these ads could potentially be used to circumvent traditional campaign finance disclosures or transparency reports. By requiring verification, Google ensures that the “Paid for by” disclosures seen on Search ads will also have a counterpart in the transparency requirements for Shopping advertisers.

Defining “Political Content” in a Retail Context

For many merchants, the biggest question is: “Does my inventory count as political content?” Google’s definition of election advertising typically covers ads that feature a political party, a current elected officeholder, or a candidate for a federal or state office. In the context of Shopping ads, this applies to products that prominently feature these elements. Common examples of products that may trigger this policy include:

1. Official Campaign Merchandise

Items directly sold by or on behalf of a campaign, such as yard signs, banners, and official apparel. These are the most obvious candidates for verification.

2. Third-Party Political Apparel

Independent retailers selling shirts, hats, or accessories that support or oppose a specific candidate or party. Even if the merchant is not affiliated with a campaign, the content of the ad remains political.

3. Printed Media and Books

Books authored by candidates, or those that focus heavily on a specific political figure currently in office or running for office, can sometimes trigger these flags if the marketing copy is deemed to be promoting a political agenda.

4. Advocacy Materials

Products that promote specific legislative issues or “hot button” political topics that are closely tied to an ongoing election cycle in the affected countries.

The Verification Process for Election Advertisers

If your business falls into the category of an election advertiser, the verification process is not something that should be left until the last minute. Google requires several pieces of documentation to verify an identity. This process is designed to ensure that the person or entity paying for the ads is who they say they are. The steps typically involve:

Identity Verification

The account holder must provide government-issued photo identification. For organizations, this may include a certificate of incorporation or other legal documents that prove the entity is registered in the country where it intends to run ads.

Eligibility Checks

Google will verify that the advertiser is a citizen or a legal resident of the country they are advertising in (or a locally registered entity). This is a critical step in preventing foreign interference in domestic elections.

Transparency Report Inclusion

Once verified, the data regarding these ads—such as who paid for them and how much was spent—will be made public in Google’s Political Advertising Transparency Report. This level of public scrutiny is a major deterrent for bad actors but a necessary step for legitimate merchants.

Potential Challenges for Print-on-Demand (POD) Sellers

One


ChatGPT citations favor a small group of domains: Study

The Shift from Search Engines to Answer Engines

For over two decades, search engine optimization has been a game of visibility on a linear results page. We optimized for keywords, tracked our rankings on Google, and fought for a spot in the coveted "top three." The rise of Large Language Models (LLMs) like ChatGPT has introduced a new paradigm: the "answer engine." In this landscape, the goal isn't just to rank; it's to be cited as a trusted source within an AI-generated response.

A study by SEO expert Kevin Indig, using data from Gauge, reveals a striking reality about how ChatGPT selects its sources. The data suggests that AI citations are not a democratic distribution of the web's knowledge; they are highly concentrated, favoring a small group of authoritative domains. For digital marketers, publishers, and SEO professionals, the study reads as a blueprint for the next era of organic visibility.

The Law of Concentration: 30 Domains Rule the Conversation

One of the most significant findings of Indig's research is the extreme concentration of citation visibility. Roughly 30 domains capture 67% of all citations within a given topic, meaning that for the vast majority of queries, ChatGPT relies on an "inner circle" of sources. The concentration is even sharper in specific sectors: in product comparison topics, the top 10 domains alone accounted for 46% of all citations, with the top 30 commanding the full 67% share. This creates a "winner-takes-most" environment that is even more restrictive than traditional search engine results pages (SERPs).

Indig notes that in AI search you are effectively shut out unless you build enough topical authority to win one of a limited number of citation "seats." Unlike Google, which might show ten blue links and various SERP features, ChatGPT provides a synthesized answer with room for only a few carefully selected references. If your brand isn't perceived as a primary authority, your chances of appearing in the citation footprint are slim.

The Gap Between Retrieval and Citation

To optimize for ChatGPT, it is essential to distinguish between "retrieval" and "citation." Just because an AI reads your page doesn't mean it will credit your page. A secondary study by AirOps, referenced in Indig's findings, highlights a massive gap between the two: ChatGPT retrieved roughly six times as many pages as it actually cited, and 85% of the pages it retrieved were never cited in the final response.

This suggests the AI casts a broad net to gather context but applies a much stricter filter when deciding which sources are worth presenting to the user. For SEOs, merely being crawlable or indexable by an AI agent is only the first step. Content must possess a level of quality, structure, and authority that survives the AI's internal vetting; the model looks for the most definitive, well-structured, and comprehensive answer, often discarding hundreds of pages that contain similar but less authoritative information.
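None of these concentration figures require proprietary tooling to reproduce on your own data. If you log which domains an AI assistant cites across a fixed set of prompts, a few lines of Python can measure how concentrated those citations are. A minimal sketch, with a hypothetical citation log standing in for real data:

```python
from collections import Counter

# Hypothetical citation log: one entry per citation observed, e.g. collected
# by running a fixed set of prompts and recording each cited domain.
citations = [
    "wirecutter.com", "wirecutter.com", "reddit.com", "nytimes.com",
    "wirecutter.com", "reddit.com", "techradar.com", "nytimes.com",
]

def top_n_share(cited_domains: list[str], n: int) -> float:
    """Fraction of all citations captured by the n most-cited domains."""
    counts = Counter(cited_domains)
    top = sum(count for _, count in counts.most_common(n))
    return top / len(cited_domains)

# On real data, top_n_share(citations, 30) approaching 0.67 would mirror
# the concentration the study reports for a topic.
print(f"Top-2 share: {top_n_share(citations, 2):.0%}")
```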
Does Ranking #1 on Google Still Matter?

A common question in the SEO community is whether traditional rankings translate to AI citations. The study confirms a strong correlation, but not a 1:1 relationship. Ranking #1 on Google remains a powerful quality signal that ChatGPT respects: pages in the top position were cited by ChatGPT 43.2% of the time, making them 3.5 times more likely to be cited than pages ranking outside the top 20.

The flip side is that nearly 57% of the time, the top-ranked page on Google is *not* cited by ChatGPT. This discrepancy highlights a shift in how value is measured. Google's algorithms may prioritize certain backlink profiles or historical signals, while ChatGPT's retrieval-augmented generation (RAG) process looks for the content that best fits the specific nuances of a conversational prompt. A high Google ranking is a prerequisite for high visibility, but no longer a guarantee of being the primary source for an AI's answer.

The Death of "One Keyword, One Page"

For years, the standard SEO tactic was to create dedicated landing pages for specific, isolated keywords. Indig's study suggests this approach is largely ineffective for AI-driven search. ChatGPT rewards domains that demonstrate broad topical coverage and use cluster-based content models, favoring pages that answer a question from multiple angles. A single, comprehensive guide that covers a topic in depth is more likely to be cited across a variety of related prompts than a series of thin pages targeting individual keywords.

This shift is driven by how ChatGPT handles "fan-out queries": follow-up or related questions the model generates itself to clarify a user's intent. The study found that one-third of cited pages came from these fan-out queries, and 95% of those queries had zero search volume in traditional SEO tools. Because they are generated dynamically by the AI, you cannot research them in the traditional sense. Instead, you must build content that is topically exhaustive, so that no matter what direction the AI takes the conversation, your domain remains the most relevant source.

The Strategic Importance of Content Length

In the debate over short-form versus long-form content, the data leans heavily toward the latter when it comes to AI citations. Longer pages generally earned more citations, though the effect varied by industry vertical. The study identified a significant lift in citation probability for pages between 5,000 and 10,000 characters, and the results became even more dramatic at the extremes: pages under 500 characters averaged only 2.39 citations, while pages exceeding 20,000 characters averaged 10.18. However, this isn't a simple "more is better" rule.
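To test the length effect in your own vertical, one approach in the spirit of the study's analysis is to bucket pages by character count and compare average citations per bucket. A minimal sketch, using hypothetical per-page data and assumed bucket boundaries:

```python
# Hypothetical per-page data: (character count, citations earned).
pages = [(450, 1), (3_200, 3), (7_500, 8), (12_000, 6), (22_000, 11)]

# Buckets loosely mirroring the study's breakpoints (our assumption).
BUCKETS = [(0, 500), (500, 5_000), (5_000, 10_000),
           (10_000, 20_000), (20_000, float("inf"))]

def avg_citations_by_length(pages: list[tuple[int, int]]) -> dict[str, float]:
    """Average citations per page within each character-count bucket."""
    results = {}
    for low, high in BUCKETS:
        in_bucket = [cites for chars, cites in pages if low <= chars < high]
        if in_bucket:
            results[f"{low}-{high}"] = sum(in_bucket) / len(in_bucket)
    return results

print(avg_citations_by_length(pages))
```

With real data, a breakdown like this helps separate a genuine length effect from a handful of outlier pages, which matters given the study's caveat that length alone is not the cause.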
