AI citations favor listicles, articles, product pages: Study

The landscape of search engine optimization is undergoing a seismic shift. As generative AI becomes integrated into the way users find information, the traditional “ten blue links” are being supplemented—and in some cases, replaced—by AI-generated summaries. For digital marketers, publishers, and SEO professionals, the burning question has been: what kind of content does an AI choose to cite?

A comprehensive new study from the Wix Studio AI Search Lab has provided the most data-driven answer to date. By analyzing over 75,000 AI-generated answers and more than one million citations across three major platforms—ChatGPT, Google AI Mode, and Perplexity—researchers have identified a clear hierarchy in the types of content that AI models prefer. The findings suggest that AI citations are not distributed randomly; instead, they heavily favor three specific formats: listicles, long-form articles, and product pages.

This research marks a pivotal moment for content strategy. Understanding these preferences allows creators to move beyond guesswork and approach “Generative Engine Optimization” (GEO) with precision. Here is a deep dive into the findings and what they mean for the future of digital publishing.

The Power Trio: Listicles, Articles, and Product Pages

According to the Wix Studio research, over half of all AI citations (52%) come from just three content formats. This concentration indicates that LLMs (Large Language Models) have developed a “preference” for structured, informative, and transactional content that mirrors how humans consume information online.

Listicles emerged as the most cited format, capturing 21.9% of all citations. This is likely due to their inherent structure. Listicles provide clear headings, bullet points, and concise summaries, making it incredibly easy for an AI to parse information and present it to a user who is looking for comparisons or quick takeaways.

Standard articles followed closely at 16.7%. These are typically long-form, informational pieces that provide depth, context, and expert analysis. When an AI needs to explain “why” or “how” something works, it turns to these comprehensive resources. Finally, product pages accounted for 13.7% of citations, serving as the primary source for transactional queries where specific features, prices, or availability are required.

Why Listicles Dominate the AI Landscape

The dominance of listicles is particularly striking in the realm of commercial intent. The study found that listicles captured 40% of commercial-intent citations—nearly double the share of any other content type. When a user asks an AI for the “best project management software” or “top-rated gaming laptops,” the AI is significantly more likely to pull data from a list-style article than from a deep-dive essay or a single product review.

From an algorithmic perspective, listicles provide a high density of entities (brands, products, or locations) in a format that is easy to categorize. For SEOs, this means that the “top 10” format is not just alive and well; it is the cornerstone of visibility in AI-driven search results.

Search Intent: The Primary Predictor of Citations

One of the most significant takeaways from the Wix Studio AI Search Lab study is that user intent—not the specific industry or even the AI model being used—is the strongest predictor of which content gets cited. AI models have become highly sophisticated at matching the “job to be done” by the user with the format best suited to deliver that information.

Informational Queries and Long-Form Authority

For informational queries, where users are looking to learn or understand a concept, articles are the undisputed king. The study found that articles are cited 2.7 times more often than other formats for informational searches, holding a 45.5% share of these citations. Listicles still play a role here, accounting for 21.7%, often when the information is better served as a series of steps or facts.

Commercial and Transactional Nuances

As mentioned, listicles take the lead for commercial queries (40.9%). However, when the user’s intent shifts toward making a purchase (transactional) or finding a specific brand (navigational), the AI pivots toward product and category pages. Combined, these two formats make up roughly 40% of citations for these intent types. This suggests that while a listicle gets you “in the door” during the consideration phase, your product page is what seals the deal in the AI’s final answer.

The Neutrality Bias: Third-Party vs. Self-Promotional Content

A critical finding for brands is the AI’s preference for neutral, third-party editorial content over self-promotional materials. This is most evident in the professional services sector. The study revealed that third-party listicles (such as reviews from tech blogs or independent analysts) accounted for 80.9% of citations. In contrast, self-promotional lists—content created by a brand to rank its own services—accounted for only 19.1%.

This indicates that LLMs are programmed or trained to prioritize perceived objectivity. If you are a SaaS company, an AI is far more likely to cite a “Top 10 CRM” list from an independent publication like Wired or The Verge than a list on your own blog where you claim to be number one. This reinforces the importance of digital PR and backlink strategies; getting mentioned in third-party “best of” lists is now a primary requirement for appearing in AI search results.

Model-Specific Differences: ChatGPT, Google, and Perplexity

While the overall trends remain consistent, the study highlighted fascinating differences in how the major AI players curate their citations. Depending on where your audience spends their time, your content strategy might need subtle adjustments.

ChatGPT: The Informational Educator

OpenAI’s ChatGPT shows a heavy lean toward articles and educational content. It prioritizes depth and narrative, making it the most “traditional” in its citation habits. If your goal is to be cited by ChatGPT, focus on high-authority, long-form content that answers complex questions thoroughly.

Google AI Mode: The Balanced All-Rounder

Google’s AI Mode (often associated with Gemini and Search Generative Experience) showed the most balanced distribution across all content formats. Given Google’s vast index of the web and its long history with shopping and local search, it is adept at pulling from listicles, articles, and product pages with equal efficiency. It reflects a more “middle-of-the-road” approach that values variety.
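The study's aggregation is easy to picture in code. Below is a minimal sketch in Python of how citation shares by format and by intent could be tabulated from a citation log. The record schema here is invented for illustration; the underlying dataset is Wix's and is not public.

```python
from collections import Counter

# Hypothetical citation records: each AI answer contributes one
# record per cited URL. Field names are illustrative assumptions.
citations = [
    {"domain": "example-reviews.com", "format": "listicle", "intent": "commercial"},
    {"domain": "example-docs.com", "format": "article", "intent": "informational"},
    {"domain": "example-shop.com", "format": "product_page", "intent": "transactional"},
    # ... the real dataset holds over one million rows
]

def share_by(records, key):
    """Return each value's share of total citations for a given field."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.most_common()}

# Overall format shares (the study reports listicles at 21.9%).
print(share_by(citations, "format"))

# Intent-conditioned shares: the study's key finding is that these
# distributions shift sharply with intent (listicles ~40% of commercial).
commercial = [r for r in citations if r["intent"] == "commercial"]
print(share_by(commercial, "format"))
```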

Google is tightening political content rules for Shopping ads starting April 16

A New Standard for Political Content in Digital Commerce

In the lead-up to several major global elections, Google is making a decisive move to enhance transparency and security within its advertising ecosystem. Starting April 16, the tech giant will implement significantly tighter restrictions on political content specifically within Google Shopping ads. While political advertising has long been a scrutinized area for Search and YouTube, this latest update signals a major expansion into the realm of e-commerce and retail media.

For years, Google Shopping has been a primary destination for consumers looking to purchase everything from electronics to apparel. However, as the line between retail products and political messaging blurs—think campaign t-shirts, hats, and printed materials—Google is moving to ensure that these items are held to the same rigorous standards as traditional campaign advertisements. This shift is not just a minor policy tweak; it is a fundamental change in how merchants must manage their product feeds and account verifications if they intend to sell items with political themes.

The Specifics: What Is Changing on April 16?

The core of this update involves a mandatory verification process for merchants whose Shopping ads contain what Google defines as “election-related content.” From the mid-April deadline, any merchant running ads that feature specific political content in nine targeted countries must be verified as an election advertiser. Failure to complete this process will lead to ad disapprovals and could potentially impact the standing of the Merchant Center account.

Historically, Shopping ads were often seen as a “softer” territory for political content because they primarily focus on physical goods. However, Google is now closing the loop, ensuring that any ad format that can be used to influence or represent a political candidate, party, or issue is subject to the same level of disclosure. This means that if you are selling a “Candidate 2024” sweatshirt, your account must now prove its legitimacy through the same channels used by official campaign committees.

Affected Jurisdictions: A Global Reach

Google’s policy update is not a global blanket rule in terms of implementation, but it targets nine key regions where political discourse and e-commerce frequently intersect. Merchants operating in or targeting the following countries must pay close attention to the new requirements:

Argentina
Australia
Chile
Israel
Mexico
New Zealand
South Africa
United Kingdom
United States

In these regions, the requirement is verification. However, the situation in India is notably different. In India, Google will outright prohibit certain political Shopping ads entirely. This move likely stems from specific local regulatory environments and the upcoming general elections in the country, where the spread of political merchandise via automated ad platforms has been a point of contention for regulators.

Why Google is Targeting Shopping Ads Now

The timing of this policy shift is no coincidence. 2024 is often described as a “super-election year,” with more than half of the world’s population heading to the polls across various nations. Digital platforms are under immense pressure from governments and the public to prevent misinformation, foreign interference, and “dark money” from influencing voters. By bringing Shopping ads into the fold of election integrity efforts, Google is acknowledging that commerce is a form of expression. A promoted product listing for a political book, a piece of memorabilia, or even a satirical sticker pack can reach millions of users. Without verification, these ads could potentially be used to circumvent traditional campaign finance disclosures or transparency reports. By requiring verification, Google ensures that the “Paid for by” disclosures seen on Search ads will also have a counterpart in the transparency requirements for Shopping advertisers.

Defining “Political Content” in a Retail Context

For many merchants, the biggest question is: “Does my inventory count as political content?” Google’s definition of election advertising typically covers ads that feature a political party, a current elected officeholder, or a candidate for a federal or state office. In the context of Shopping ads, this applies to products that prominently feature these elements. Common examples of products that may trigger this policy include:

1. Official Campaign Merchandise
Items directly sold by or on behalf of a campaign, such as yard signs, banners, and official apparel. These are the most obvious candidates for verification.

2. Third-Party Political Apparel
Independent retailers selling shirts, hats, or accessories that support or oppose a specific candidate or party. Even if the merchant is not affiliated with a campaign, the content of the ad remains political.

3. Printed Media and Books
Books authored by candidates, or those that focus heavily on a specific political figure currently in office or running for office, can sometimes trigger these flags if the marketing copy is deemed to be promoting a political agenda.

4. Advocacy Materials
Products that promote specific legislative issues or “hot button” political topics that are closely tied to an ongoing election cycle in the affected countries.
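For merchants with large catalogs, a rough first-pass audit can help surface items that may fall into these categories before the deadline. The sketch below is purely illustrative: the feed columns (id, title, description) and the keyword list are assumptions, not Google's actual policy matcher, so it can only flag candidates for human review.

```python
import csv
import re

# Hypothetical watchlist: names of candidates and parties relevant to
# the merchant's target countries. A real compliance check would source
# this from an official register, not a hard-coded list.
POLITICAL_TERMS = [r"candidate 2024", r"vote \w+", r"campaign"]
PATTERN = re.compile("|".join(POLITICAL_TERMS), re.IGNORECASE)

def flag_political_items(feed_path):
    """Yield IDs of products whose titles or descriptions may trigger
    Google's election-ads verification requirement."""
    with open(feed_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = f"{row.get('title', '')} {row.get('description', '')}"
            if PATTERN.search(text):
                yield row.get("id")

for item_id in flag_political_items("merchant_feed.csv"):
    print(f"Review before April 16: {item_id}")
```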
The Verification Process for Election Advertisers

If your business falls into the category of an election advertiser, the verification process is not something that should be left until the last minute. Google requires several pieces of documentation to verify an advertiser’s identity. This process is designed to ensure that the person or entity paying for the ads is who they say they are. The steps typically involve:

Identity Verification
The account holder must provide government-issued photo identification. For organizations, this may include a certificate of incorporation or other legal documents that prove the entity is registered in the country where they intend to run ads.

Eligibility Checks
Google will verify that the advertiser is a citizen or a legal resident of the country they are advertising in (or a locally registered entity). This is a critical step in preventing foreign interference in domestic elections.

Transparency Report Inclusion
Once verified, the data regarding these ads—such as who paid for them and how much was spent—will be made public in Google’s Political Advertising Transparency Report. This level of public scrutiny is a major deterrent for bad actors but a necessary step for legitimate merchants.

Potential Challenges for Print-on-Demand (POD) Sellers

One

ChatGPT citations favor a small group of domains: Study

The Shift from Search Engines to Answer Engines

For over two decades, search engine optimization has been a game of visibility on a linear results page. We optimized for keywords, tracked our rankings on Google, and fought for a spot in the coveted “top three.” However, the rise of Large Language Models (LLMs) like ChatGPT has introduced a new paradigm: the “Answer Engine.” In this new landscape, the goal isn’t just to rank; it’s to be cited as a trusted source within an AI-generated response.

A groundbreaking study conducted by SEO expert Kevin Indig, utilizing data from Gauge, has revealed a startling reality about how ChatGPT selects its sources. The data suggests that AI citations are not a democratic distribution of the web’s knowledge. Instead, they are highly concentrated, favoring a very small group of authoritative domains. For digital marketers, publishers, and SEO professionals, this study serves as a blueprint for the next era of organic visibility.

The Law of Concentration: 30 Domains Rule the Conversation

One of the most significant findings of Indig’s research is the extreme concentration of citation visibility. According to the data, roughly 30 domains capture a staggering 67% of all citations within a given topic. This means that for the vast majority of queries, ChatGPT relies on an “inner circle” of sources to provide information to users.

This concentration is even more pronounced in specific sectors. In product comparison topics, the top 10 domains alone accounted for 46% of all citations, and the top 30 domains commanded 67% of the citation share. This creates a “winner-takes-most” environment that is even more restrictive than traditional search engine results pages (SERPs).

Indig notes that in the world of AI search, you are effectively shut out unless you build enough topical authority to win one of a limited number of citation “seats.” Unlike Google, which might show ten blue links and various features, ChatGPT provides a synthesized answer that only has room for a few carefully selected references. If your brand isn’t perceived as a primary authority, your chances of appearing in the citation footprint are slim.

The Gap Between Retrieval and Citation

To understand how to optimize for ChatGPT, it is essential to distinguish between “retrieval” and “citation.” Just because an AI “reads” your page doesn’t mean it will credit your page. A secondary study by AirOps, referenced in Indig’s findings, highlights a massive gap between these two actions. The research found that ChatGPT retrieved approximately six times as many pages as it actually cited. Perhaps more concerning for publishers is the fact that 85% of the pages retrieved by the AI were never cited in the final response.

This suggests that the AI uses a broad net to gather context but applies a much stricter filter when deciding which sources are worthy of being presented to the user. For SEOs, this means that merely being “crawlable” or “indexable” by an AI agent is only the first step. The content must possess a level of quality, structure, and authority that survives the AI’s internal vetting process. The AI is looking for the most definitive, well-structured, and comprehensive answer, often discarding hundreds of other pages that contain similar but less “authoritative” information.
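The “citation seat” math is straightforward to reproduce on your own data. Here is a minimal sketch, assuming you have a log of cited domains (one entry per citation event) exported from a rank-tracking or AI-visibility tool.

```python
from collections import Counter

def top_k_share(cited_domains, k):
    """Fraction of all citations captured by the k most-cited domains."""
    counts = Counter(cited_domains)
    total = sum(counts.values())
    top_k = sum(count for _, count in counts.most_common(k))
    return top_k / total

# Hypothetical citation log: one domain per citation event.
log = ["wired.com", "nytimes.com", "wired.com", "pcmag.com", "wired.com"]

print(f"Top-10 share: {top_k_share(log, 10):.1%}")  # study: ~46% for product comparisons
print(f"Top-30 share: {top_k_share(log, 30):.1%}")  # study: ~67%
```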
Does Ranking #1 on Google Still Matter?

A common question in the SEO community is whether traditional rankings translate to AI citations. The study confirms that there is a strong correlation, but it is not a 1:1 relationship. Ranking #1 in Google remains a powerful signal of quality that ChatGPT respects. Pages that rank in the top position on Google were cited by ChatGPT 43.2% of the time. This is a significant advantage, as #1 ranked pages are 3.5 times more likely to be cited than pages ranking outside the top 20.

However, the flip side of this statistic is that nearly 57% of the time, the top-ranked page on Google is *not* cited by ChatGPT. This discrepancy highlights a shift in how value is measured. Google’s algorithms may prioritize certain backlink profiles or historical signals, while ChatGPT’s retrieval-augmented generation (RAG) process looks for content that best fits the specific nuances of a conversational prompt. While a high Google ranking is a prerequisite for high visibility, it is no longer a guarantee of being the primary source for an AI’s answer.

The Death of “One Keyword, One Page”

For years, the standard SEO tactic was to create dedicated landing pages for specific, isolated keywords. Indig’s study suggests that this approach is largely ineffective for AI-driven search. ChatGPT rewards domains that demonstrate broad topical coverage and use cluster-based content models. The AI tends to favor pages that answer a question from multiple angles. This “cluster-based” approach means that a single, comprehensive guide that covers a topic in depth is more likely to be cited across a variety of related prompts than a series of thin pages targeting individual keywords.

This shift is driven by how ChatGPT handles “fan-out queries”—follow-up or related questions generated by the AI to clarify a user’s intent. The study found that one-third of cited pages came from these fan-out queries. Interestingly, 95% of these queries had zero search volume in traditional SEO tools. Because these queries are generated dynamically by the AI, you cannot “research” them in the traditional sense. Instead, you must build content that is topically exhaustive, ensuring that no matter what direction the AI takes the conversation, your domain remains the most relevant source.

The Strategic Importance of Content Length

In the debate over short-form versus long-form content, the data leans heavily toward the latter when it comes to AI citations. Generally, longer pages earned more citations, though the effectiveness varied by industry vertical. The study identified a significant “lift” in citation probability for pages between 5,000 and 10,000 characters. The results became even more dramatic at the extreme end of the spectrum:

Pages under 500 characters averaged only 2.39 citations.
Pages exceeding 20,000 characters averaged 10.18 citations.

However, this isn’t a simple “more

Google is testing AI-generated animated video clips inside PMax

The Evolution of Creative Assets in Performance Max

Google Ads is undergoing a radical transformation driven by generative AI, and the latest feature spotted in the wild suggests that the barrier to entry for video advertising is about to vanish. For years, digital marketers have known that video assets typically outperform static images in terms of engagement and conversion rates. However, the high cost of production, the need for specialized motion designers, and the time required to iterate on video content have kept many advertisers—particularly small to medium-sized businesses—on the sidelines.

That landscape is shifting. Recent observations within the Google Ads interface reveal that Google is testing a new tool that allows advertisers to generate animated video clips directly within Performance Max (PMax) campaigns using only a single source image. This development marks a significant milestone in Google’s “AI-first” approach to advertising, effectively turning static asset groups into dynamic, multi-media powerhouses with the click of a button.

The Discovery: AI-Generated Animation Spotted in PMax

The feature was first brought to light by Nikki Kuhlman, Vice President of Search at JumpFly, Inc. While managing Performance Max campaigns, Kuhlman identified a new creative option within the asset group workflow. This feature allows the system to take a basic image—such as a brand logo, a product shot, or a real estate photo—and use artificial intelligence to enhance and animate it into a short video clip.

This discovery confirms that Google is looking to automate the “creative” side of the house as aggressively as it has automated bidding and targeting. For advertisers who have historically struggled to provide the “Video” component required for a “Good” or “Excellent” Ad Strength rating in PMax, this tool could be the missing piece of the puzzle.

How the AI Animation Workflow Works

The process of generating these animated clips is designed to be frictionless, integrated directly into the standard asset upload flow. Based on early testing and observations, the workflow follows a specific sequence of AI-driven steps:

1. Source Image Selection
Advertisers begin by uploading a high-quality source image. This can be a variety of brand assets, including company logos, product photography, or lifestyle shots. This image serves as the foundation for the AI’s generative process.

2. AI-Driven Image Enhancement
Once the image is uploaded, the Google Ads AI doesn’t just animate the original file. Instead, it generates several “enhanced” versions of that image. This enhancement process might involve expanding the background (generative fill), adjusting lighting, or adding stylistic elements that make the image more suitable for a video format.

3. Generation of Animated Clips
Each enhanced image then produces two distinct animated clips. The AI analyzes the content of the image to determine the most logical motion. For example, if the source is a logo, the AI might generate a 3D spin or a subtle pulse. If the source is a landscape or a property, it might create a cinematic “Ken Burns” style pan or zoom.

4. Selection and Implementation
Advertisers can then select up to five of these generated animated clips per asset group. This allows for creative testing within the PMax environment, as the algorithm will rotate these clips to find the versions that resonate most with specific audience segments.

Critical Restrictions: The “No Faces” Rule

One notable restriction identified during the testing phase is that source images containing human faces cannot currently be used for this specific animation feature. If an advertiser attempts to upload a portrait or a group shot, the AI-generation tool will likely be disabled for that specific asset.

However, there is an interesting nuance: while the *source* image cannot have faces, the AI’s “enhanced” versions of a generic scene may sometimes generate people or figures to fill out the background or add life to a scene. This suggests that Google is maintaining a strict policy on person-based privacy and deepfake prevention regarding user-provided photos, while still allowing its generative engine to populate scenes with AI-synthesized humans where appropriate.

Early Results and Visual Output Quality

Initial feedback from the testing phase suggests that the outputs are surprisingly high-quality for an automated tool. The AI appears to be contextual; it understands what it is looking at and applies motion that feels natural to the subject matter. In one test case, a static logo was transformed into a professional-looking spinning animation. In another instance involving the real estate sector, a static photo of a house with a “Sold” sign was turned into a slow, cinematic pan that gave the viewer a sense of movement and scale. These types of micro-animations are perfect for the Google Display Network and YouTube Shorts, where subtle motion can catch a user’s eye more effectively than a static banner.

Where Will These Ads Appear?

While Google has not yet released official documentation detailing the full list of placements for these animated clips, evidence from ad previews suggests they are primarily targeting the Google Display Network (GDN). When these clips are added to an asset group, they begin surfacing in Display ad previews, providing a bridge between traditional static display ads and full-scale video ads. It is also highly likely that these assets will find their way into:

YouTube Shorts: The vertical, short-form nature of these clips is a natural fit for the Shorts feed.
Discover: Subtle animations in the Discover feed can significantly improve click-through rates (CTR).
Gmail: Animated assets can provide a more interactive feel within the Promotions tab.

Why This Matters for Modern Advertisers

The introduction of AI-generated animation within PMax addresses one of the biggest “pain points” in digital marketing: the creative gap. Performance Max is an “all-or-nothing” campaign type; it performs best when it has a diverse range of assets (headlines, descriptions, images, and videos) to work with. Many advertisers run PMax campaigns with only static images. When they do this, Google often creates “auto-generated videos” which, historically, have been criticized for being low-quality slideshows of the advertiser’s images. By giving the AI the power to animate a single image with

SEO’s biggest threat in 2026? Your own organization

The Internal Crisis of SEO in 2026

For decades, search engine optimization was defined by the external struggle: the battle against Google’s ever-changing algorithms and the fight for the top spot on a ten-blue-link results page. However, as we look toward the landscape of 2026, the primary threat to organic growth has shifted. It is no longer just about competing with other websites or keeping up with AI-driven search features. The most significant threat to a brand’s visibility today is the organization itself.

The SEO industry has undergone a radical transformation. AI tools and generative search platforms have dominated the conversation for the last two years, fundamentally altering how users find information. But while the industry focuses on these technological shifts, many companies are rotting from within. Fragmented data, internal silos, outdated success metrics, and a lack of clear ownership are quietly sabotaging even the most sophisticated digital strategies. As SEO expands beyond the confines of a single website and into the vast ecosystem of AI discovery, the role of the SEO professional has become broader and more influential—yet harder for organizations to manage. To survive and thrive in 2026, companies must address the organizational friction that prevents them from executing at the speed of modern search.

The Paradox of AI Over-Reliance

In 2026, nearly every SEO team uses artificial intelligence for efficiency. We use it to generate content briefs, analyze massive datasets, and predict keyword trends. This is no longer a luxury; it is a necessity for survival. When an AI can produce a workable content brief in seconds, a human spending three hours on the same task is a liability. However, this efficiency creates a dangerous trap: the “sea of sameness.”

The risk begins when teams rely on AI not just for speed, but for the entire creative and strategic process. If your organization asks the same prompts of the same Large Language Models (LLMs) as your competitors, you will inevitably receive the same output. “Acceptable” content is no longer enough to rank or to be cited by AI engines. In an era of infinite content generation, uniqueness is the only currency that matters. Without a distinct brand voice, a unique point of view, or proprietary data, your content becomes generic and indistinguishable from the background noise of the internet.

Furthermore, there is a technical risk in trusting AI-driven analysis without human oversight. AI is exceptional at identifying patterns, but it is equally capable of “hallucinating” facts or misinterpreting data in a way that can lead to disastrous business decisions. Organizations that prioritize speed over quality—using AI for urgent analysis without verification—often find themselves building strategies on a foundation of errors. Competitive advantage in 2026 does not come from following the patterns that AI identifies; it comes from knowing when to break them.

Navigating Fragmented Data and the Dark User Journey

SEO professionals have historically complained about “dark data” and incomplete attribution, but the problem has reached a breaking point. In the past, we could reasonably map a user journey from a keyword search to a click, and then to a conversion. In 2026, that journey is shattered.

The modern user journey often starts within an AI assistant—whether it’s ChatGPT, Claude, or a search engine’s integrated generative feature. Users are asking complex questions, comparing products, and narrowing down their choices before they ever think about clicking a link. By the time a user finally lands on your website, 80% of their decision-making process may already be complete. The issue? Most organizations have zero visibility into those initial steps.

We are operating in a world of fragmented signals. While platforms like Microsoft Bing have introduced basic reporting for AI search visibility, the data remains limited. We cannot see the specific prompts that led to our brand being mentioned, nor can we accurately attribute the influence of an AI recommendation on a later direct-visit conversion. This lack of visibility makes it incredibly difficult for SEO teams to prove their value to stakeholders who still live and die by last-click attribution.

Some forward-thinking organizations are attempting to close this gap by adding qualitative questions to lead forms, asking users exactly how they discovered the brand. While this provides some signal, it relies on human memory, which is notoriously unreliable. The organizational threat here is failing to adapt your attribution models to reflect this new reality. If your company still measures SEO success based on 2018 standards, you are essentially flying blind.

The Danger of Outdated and Misaligned KPIs

As the data landscape becomes more fragmented, many organizations are retreating to the comfort of the wrong KPIs. Despite years of education, many stakeholders still view “raw traffic” as the ultimate measure of SEO success. This mindset is a direct threat to strategic progress. Organic growth in 2026 isn’t always about driving more sessions; it’s about driving the right visibility.

This has led to the rise of “AI visibility” metrics—tracking citations, mentions, and presence within LLM responses. While these are better than traditional traffic metrics in the current environment, they come with their own set of risks. Teams can easily become obsessed with improving visibility scores for prompts that have no actual business value. For example, appearing in an AI answer for a broad informational query like “What is project management software?” might look great on a report, but it is far less valuable than appearing for a high-intent query like “Which project management software is best for remote engineering teams?”

Organizations often fail because they don’t tie these new visibility metrics to actual business outcomes. Without this connection, SEO teams end up optimizing for vanity rather than revenue. The complexity of tracking every possible AI prompt variation is a rabbit hole that can consume a team’s entire budget. The goal shouldn’t be to track every phrasing but to understand the underlying user intent. When leadership fails to define what success looks like in this new era, the SEO team is left chasing ghosts.
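One way to act on this is to weight visibility by intent value rather than counting raw citations. The sketch below is a simplified illustration; the weights and tracked prompts are invented for the example, and a real implementation would need its own value model.

```python
# Intent weights are illustrative assumptions, not an industry standard.
INTENT_WEIGHTS = {
    "informational": 0.2,  # e.g. "What is project management software?"
    "commercial": 1.0,     # e.g. "Which PM software is best for remote teams?"
}

tracked_prompts = [
    {"prompt": "what is project management software", "intent": "informational", "cited": True},
    {"prompt": "best pm software for remote engineering teams", "intent": "commercial", "cited": True},
    {"prompt": "pm software pricing comparison", "intent": "commercial", "cited": False},
]

def weighted_visibility(prompts):
    """Share of intent-weighted value captured, instead of raw citation rate."""
    earned = sum(INTENT_WEIGHTS[p["intent"]] for p in prompts if p["cited"])
    possible = sum(INTENT_WEIGHTS[p["intent"]] for p in prompts)
    return earned / possible

raw_rate = sum(p["cited"] for p in tracked_prompts) / len(tracked_prompts)
print(f"Raw citation rate: {raw_rate:.0%}")                             # 67%
print(f"Intent-weighted visibility: {weighted_visibility(tracked_prompts):.0%}")  # ~55%
```

The two numbers diverge precisely when a team is winning low-value prompts and losing high-value ones, which is the failure mode the article describes.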
The Ownership Crisis: Who Controls the Brand Footprint?

Apple is bringing ads to Apple Maps this summer

A New Era for Apple’s Advertising Ecosystem

For years, Apple has positioned itself as a sanctuary for privacy-conscious consumers, often contrasting its business model with those of data-driven giants like Google and Meta. However, the tech landscape is shifting. Apple is officially expanding its advertising footprint by bringing sponsored listings to Apple Maps this summer. This move marks a pivotal moment in the company’s evolution, signaling a more aggressive pursuit of Services revenue through high-intent, location-based advertising.

The introduction of ads within Apple Maps is not merely a minor update; it is a strategic expansion of the Apple Ads platform. By opening up its navigation app to sponsored results, Apple is creating a new marketplace where local businesses, retailers, and global brands can compete for the attention of millions of users who are actively looking for products and services. This development follows years of steady growth in the App Store’s search ads business, proving that Apple is ready to monetize its most frequently used utility apps.

How Sponsored Listings in Apple Maps Will Work

The mechanics of Apple Maps ads will feel familiar to anyone who has managed a Google Maps or local search campaign. According to industry reports and insights from Bloomberg’s Mark Gurman, the system will operate on a bidding model. When a user enters a search query—such as “coffee near me” or “electrician”—businesses can bid for the top spot in the results list. These sponsored listings will likely be clearly labeled to distinguish them from organic results, maintaining a level of transparency for the user.

Unlike traditional banner ads that can feel intrusive, these ads are designed to be contextual. They appear at the exact moment a user is expressing a specific need, making them one of the most effective forms of digital advertising. For example, a local boutique could appear at the top of the list when a user searches for “clothing stores,” providing a direct path to a physical storefront.

Beyond simple search results, there is potential for these ads to appear in other areas of the Maps ecosystem, such as the “Find Nearby” suggestions or even within the detailed view of specific categories. As the platform matures, we may see more sophisticated targeting options based on general geographic areas and specific time-of-day triggers.

The Timeline: From Apple Business Launch to Summer Ads

The rollout of this new advertising channel is happening in distinct phases. Apple has confirmed that the foundation for this system is a new platform called Apple Business, which is scheduled to launch on April 14. This platform will serve as the central hub for business owners to manage their presence across the Apple ecosystem, including Maps, Siri, and Wallet.

Once the Apple Business platform is live, businesses will have a window of time to claim their listings, update their information, and verify their locations. Following this setup period, the advertising functionality is expected to go live during the summer months. This timeline gives digital marketers and local business owners a critical few weeks to prepare their strategies before the first ads begin appearing on iPhones, iPads, and Mac devices worldwide.

The web version of Apple Maps, which was recently expanded to support more browsers, will also likely feature these sponsored listings. This ensures that Apple’s ad reach extends beyond its hardware owners to anyone using its mapping services via a desktop or mobile browser.

Why Apple is Moving into Map-Based Advertising

The primary driver behind this move is the continued growth of Apple’s Services division. While hardware sales—particularly the iPhone—remain the cornerstone of the company’s finances, the Services sector has become a high-margin engine of growth. By diversifying its revenue streams to include more robust advertising options, Apple can provide more consistent value to its shareholders.

Apple Maps is one of the most used apps in the world, with hundreds of millions of active users. It represents “bottom-of-the-funnel” traffic; when someone opens a map app, they are usually in the process of making a decision. They are looking for a place to eat, a store to visit, or a service to book. For Apple, leaving this high-intent traffic unmonetized was a missed opportunity, especially as Google has successfully monetized Google Maps for years.

Additionally, the growth of the Apple Ads business (formerly known as Search Ads) has been explosive. By leveraging the same infrastructure that powers App Store ads, Apple can offer a seamless experience for existing advertisers. The infrastructure is already there; the Maps app is simply a new, highly valuable piece of digital real estate.

The Privacy Angle: Maintaining the Brand Promise

One of the biggest questions surrounding this move is how Apple will balance its advertising ambitions with its public commitment to user privacy. Apple has built a significant portion of its brand identity around being the “pro-privacy” alternative in the tech industry.

To address this, the company is implementing strict data protocols for Apple Maps ads. Unlike competitors that may track a user’s entire browsing history to serve a map ad, Apple has stated that location-based ads in Maps will not be associated with a user’s Apple Account. Instead, the data used to serve the ad is processed on the device itself. Personal identifiers are not collected or stored by Apple, and the data is not shared with third-party advertisers.

This “on-device” processing is a hallmark of Apple’s privacy strategy. It allows for relevant ad delivery—such as showing a user a nearby restaurant based on their current GPS coordinates—without creating a permanent profile of that user’s movements in the cloud. This approach allows Apple to compete in the digital ad space while still adhering to the privacy standards its customers expect.

Why Digital Marketers and Local Businesses Should Care

The entry of Apple Maps into the advertising space creates a massive new opportunity for local SEO and digital marketing professionals. For years, Google Maps has been the dominant player in local search advertising. The introduction of a viable competitor means that businesses now have a second

Bing Webmaster Tools now links AI queries to cited pages

The Evolution of Search: Why AI Citations are the New Currency

The landscape of search engine optimization is undergoing its most significant transformation since the invention of the crawler. As artificial intelligence becomes deeply integrated into the browsing experience, the traditional metrics of success—keyword rankings and blue-link click-through rates—are being joined by a new, more complex metric: citation visibility. Microsoft, a front-runner in this space with its integration of Copilot into Bing, has been at the forefront of providing webmasters with the data they need to navigate this new world.

The recent update to Bing Webmaster Tools represents a pivotal moment for SEOs and digital publishers. Microsoft has officially introduced query-to-page mapping within its AI Performance report. This feature finally bridges the gap between what users are asking AI and which specific pages are being used to “ground” those answers. For the first time, webmasters can see a direct line of sight between a generative AI prompt and the source material it relies upon, turning what was once a “black box” of AI processing into an actionable map for content optimization.

Understanding the AI Performance Report in Bing Webmaster Tools

To appreciate the significance of the new mapping feature, it is essential to understand the foundation it was built upon. Microsoft launched the AI Performance report in early 2026, positioning it as the industry’s first dedicated dashboard for Generative Engine Optimization (GEO). While traditional reports focus on Search Engine Results Pages (SERPs), the AI Performance report focuses on how content performs within the context of AI-driven conversational interfaces, such as Bing Chat and Microsoft Copilot.

Before this latest update, the dashboard provided two distinct sets of data: a list of “grounding queries” (the prompts users type into the AI) and a list of “cited URLs” (the web pages the AI used to generate its response). While useful, these data points existed in silos. A webmaster could see that a specific page was being cited frequently, but they couldn’t be entirely sure which specific user questions were triggering those citations. Conversely, they could see which queries were popular but couldn’t easily identify which of their pages were successfully satisfying those queries.

The AI Performance report does not focus on traditional clicks. Instead, it measures “citation visibility.” In the AI web, a citation is a form of brand authority. Even if a user doesn’t click through to the website, the brand is credited within the AI’s response, establishing trust and influence. However, for those looking to drive traffic, understanding the link between the query and the page is the only way to refine a strategy that encourages deeper user engagement.

Grounding Query-to-Page Mapping: How It Works

The new functionality introduced by Microsoft is a “many-to-many” mapping system. This reflects the reality of how large language models (LLMs) function. A single complex AI query might draw information from three different pages on your site to synthesize a complete answer. Conversely, one comprehensive “ultimate guide” on your website might serve as the grounding source for hundreds of different long-tail AI queries.
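The many-to-many relationship is the key mental model here. As a rough sketch, if you could export pairs of (grounding query, cited URL) from the report, both lookup directions fall out of a single pass over the data. The field names below are assumptions for illustration, not Bing's documented export schema.

```python
from collections import defaultdict

# Hypothetical export rows: (grounding query, cited URL).
rows = [
    ("best gaming laptops for ray tracing", "/reviews/gpu-laptops"),
    ("best gaming laptops for ray tracing", "/guides/ray-tracing-explained"),
    ("how does ray tracing work", "/guides/ray-tracing-explained"),
]

query_to_pages = defaultdict(set)   # workflow 1: query -> cited sources
page_to_queries = defaultdict(set)  # workflow 2: page -> intents it serves

for query, url in rows:
    query_to_pages[query].add(url)
    page_to_queries[url].add(query)

# One comprehensive guide grounding many long-tail prompts is the
# expected shape of the data:
for url, queries in page_to_queries.items():
    print(f"{url} grounds {len(queries)} distinct grounding queries")
```

Running the two dictionaries against a real export reproduces both dashboard views: pick a query to see its sources, or pick a URL to see every intent it serves.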
The update enables two primary workflows within Bing Webmaster Tools:

1. From Query to Source
By clicking on a specific grounding query within the dashboard, webmasters can now see a list of every page on their site that the AI cited to answer that specific prompt. This is invaluable for understanding how AI interprets your content’s relevance. If you find that a query about “best gaming laptops for ray tracing” is citing your generic “laptop deals” page instead of your specific technical review, you have identified a clear opportunity for content refinement or technical SEO improvement.

2. From Page to Intent
Alternatively, users can click on a cited URL to see a comprehensive list of every grounding query that led the AI to that page. This reveals the “search intent” of the AI web. It allows publishers to see the various ways users are interacting with their content via AI. A single article might be serving intents ranging from factual lookups to complex “how-to” advice, and seeing these queries listed helps creators understand the true value and reach of their existing assets.

Why This Matters for Digital Strategy and SEO

The shift from traditional search to AI-assisted search isn’t just a technical change; it’s a shift in user behavior. Users are no longer just searching for “best espresso machines”; they are asking AI to “compare the top five espresso machines for under $500 that have a built-in milk frother and fit in a small kitchen.” These are “grounding queries,” and they are far more specific and intent-rich than traditional keywords. Without query-to-page mapping, SEOs were essentially guessing. They could see that their visibility was up or down, but they couldn’t diagnose the “why.” This update provides several strategic advantages:

Prioritizing Content Updates
In the past, content audits were often based on which pages had the highest traffic. In the AI era, you should also prioritize pages that have high citation frequency for high-value queries. If a page is being cited as a primary source for a critical industry topic, that page becomes a high-stakes asset. Ensuring its information is up-to-date and its citations are accurate is now a top-tier SEO task.

Eliminating Guesswork in GEO
Generative Engine Optimization (GEO) is the practice of optimizing content so that it is more likely to be picked up by AI models. This often involves using clear, authoritative language, structured data, and direct answers to complex questions. With the new mapping tool, you can see exactly which “optimization experiments” are working. If you rewrite a section of a page to be more “AI-friendly” and suddenly see it being cited for a wider range of grounding queries, you have immediate proof of concept.

Identifying Information Gaps
By analyzing which queries *don’t* map to your preferred pages, you can identify content gaps. If users are asking questions about a specific feature of your product and the AI is citing a competitor or a third-party forum instead of your

The entity home: The page that shapes how search, AI, and users see your brand

In the rapidly evolving landscape of digital discovery, the way brands are perceived has shifted from a simple ranking on a search engine results page (SERP) to a complex web of identity resolution. At the center of this web lies a single, often undervalued asset: the entity home. This page serves as the definitive anchor that dictates how search algorithms, AI models, and human users interpret your brand’s authority and purpose.

For decades, the “About Us” page was a secondary thought for most SEO strategies. It was seen as a necessary but low-traffic destination that didn’t directly contribute to the bottom line. However, in the era of Generative AI and entity-based search, this page has become the single most important piece of real estate on your website. It is where algorithms resolve your identity, where bots map your digital footprint, and where users perform the final verification of trust before they convert.

Data suggests that optimizing this specific page can have a direct impact on the bottom line. In controlled tests, improving the clarity and evidence-based claims of an entity home alone resulted in a 6% lift in conversions for visitors who reached it. The logic is clear: both the human visitor and the search algorithm are performing the same task—cross-referencing claims, validating evidence, and determining if the brand is trustworthy. If your entity home fails this test, the rest of your SEO strategy is built on a foundation of sand.

What the entity home isn’t

To master the concept of the entity home, we must first clear away the misconceptions that lead many marketing teams astray. It is easy to confuse identity resolution with traditional ranking tactics, but the two serve entirely different purposes.

Not a ranking trick

Success with an entity home does not look like a sudden traffic spike on your analytics dashboard next Tuesday. This isn’t a “hack” to boost your position for a specific keyword. Instead, the entity home builds “confidence priors.” It creates a baseline of trust that compounds over time. When an algorithm is 99% sure who you are and what you do, it is far more likely to recommend you across all its various surfaces, from traditional search to AI-driven voice assistants.

Not just schema

While Schema markup is a vital tool for communicating with machines, it is not a substitute for substance. Schema is simply the language used to describe the facts. If the page lacks clear claims, links to evidence, and consistent brand positioning, the Schema is nothing more than a well-formatted, empty declaration. You cannot “code” your way out of a lack of authority; the content must exist in human-readable form before it can be effectively structured for machines.
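To make the “language versus substance” point concrete, here is a minimal Organization markup sketch, generated via Python purely for illustration. Every value is a placeholder; the markup only helps if each claim also exists on the page in human-readable form and is corroborated elsewhere.

```python
import json

# Illustrative entity-home markup. The facts below are placeholders;
# schema.org's Organization type and the sameAs property are real,
# but the values must mirror claims actually made on the page.
entity_home = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com/about",
    "description": "Example Co builds inventory software for independent retailers.",
    "sameAs": [
        # Corroborating profiles that help algorithms resolve identity.
        "https://www.linkedin.com/company/example-co",
        "https://en.wikipedia.org/wiki/Example_Co",
    ],
}

print(f'<script type="application/ld+json">\n{json.dumps(entity_home, indent=2)}\n</script>')
```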
Not always the About page

While the “About” page is the standard entity home for most corporations, it is not a universal rule. For an individual, the entity home might be a page on a third-party website, a personal portfolio, or even a Wikipedia entry. The ideal URL is the one that provides the clearest identity statement, carries the highest internal link prominence from the rest of the site, and possesses a stable, long-term address. Choosing the wrong URL can fragment your identity across the web, making it harder for AI to “stitch” your brand together.

Not enough without corroboration

The entity home is where you make your claims, but the rest of the internet is where those claims are proven. The algorithm will only reach a high confidence threshold when what you say on your entity home matches what independent third-party sources say about you. Without external corroboration—press coverage, awards, professional associations, and peer mentions—your entity home is just a self-serving brochure that the algorithm may choose to ignore.

Three audiences, one anchor

Your entity home is a multi-functional tool that serves three distinct audiences simultaneously. Most brands fail because they optimize for only one of these groups, leaving the other two to guess at the brand’s true identity.

First, there are the bots. Bots use the entity home as a compass when mapping your digital footprint. As they crawl the web, they look for a “source of truth” to help them interpret every other mention of your brand. If a bot finds a mention of your CEO on an industry blog, it returns to the entity home to confirm that person’s role and the organization’s relationship to that industry.

Second, there are the algorithms. Unlike bots, which are primarily concerned with infrastructure and crawling, algorithms focus on identity resolution. They check the confidence of your brand’s claims at every gate of the search pipeline. Using frameworks like DSCRI (Discovery, Selection, Crawl, Render, Index) and ARGDW (Assess, Rank, Generate, Display, Win), the algorithm uses the entity home as the baseline against which all other signals are measured.

Third, there are the humans. Human visitors reach for the entity home when they are looking for an authoritative resource. They aren’t looking for a sales pitch; they are looking for information that validates their instinct to trust you. A page structured to inform rather than to sell actually performs better at selling, because it establishes the credibility necessary for a transaction to occur.

The evolution of the entity home website

There is a critical distinction between an “entity home page” and an “entity home website.” While the page anchors the identity, the website educates. A single page can declare who you are, but it cannot fully articulate the depth of your expertise, your network, or your history. A complete entity home website uses a structured cluster of pages to give the algorithm a 360-degree view of the brand. This structure should answer five key questions for the algorithm:

Who is this entity? (The core identity and history.)
What does it do? (The primary services and products.)
Who does it work alongside? (Partners, peers, and professional networks.)
What has it produced? (Case studies, whitepapers, and intellectual property.)
Where do others confirm this? (Press mentions and independent corroboration.)

This shift in focus

Google: 404 Crawling Means Google Is Open To More Of Your Content

Understanding the Nuance of Googlebot Behavior

In the world of search engine optimization, the sight of a 404 error in a Google Search Console report often triggers an immediate sense of panic. For years, the prevailing wisdom among digital marketers and site owners has been that 404 “Page Not Found” errors are a sign of neglect, a poor user experience, and a potential drain on a website’s SEO health. However, recent insights from Google’s Search Advocate, John Mueller, suggest that we should look at these errors through a different lens. Rather than being a strictly negative metric, the presence of Googlebot crawling 404 pages can actually be interpreted as a positive indicator of how Google views your website’s overall value and capacity.

When Googlebot—the automated crawler used by Google to index the web—repeatedly visits URLs that return a 404 status code, it is engaging in a process of exploration. According to Mueller, this activity implies that Google is “open” to discovering more content on your domain. It suggests that the search engine has allocated a certain level of trust and crawl resources to your site, and it is actively looking for new or updated information, even if it occasionally hits a dead end. To understand why this is the case, we must dive deep into the mechanics of crawling, the concept of crawl budget, and the technical hierarchy of HTTP status codes.

The Myth of the 404 Penalty

One of the most persistent myths in SEO is that having 404 errors will directly penalize a website’s rankings. It is important to clarify that 404 errors are a completely normal part of the web. Sites evolve, products go out of stock, and articles are deleted. Google has stated multiple times that the mere existence of 404 errors does not lead to a site-wide ranking demotion. Google expects the web to change, and the 404 status code is the technically correct way to tell a search engine that a page no longer exists.

The nuance lies in how Google allocates its crawling resources. If Googlebot is spending time visiting 404 pages, it means it is still very much interested in your site. If Google deemed a site to be low-quality or spammy, it would likely reduce its crawl frequency significantly. The fact that the bot is “knocking on doors” that are no longer there suggests it has the appetite to crawl more, provided you give it something worth indexing.

Decoding John Mueller’s Insights on Crawling Capacity

The core of this discussion stems from a conversation involving John Mueller regarding crawl spikes and the appearance of 404s in crawl logs. Mueller indicated that when Googlebot discovers a high volume of 404 errors, it isn’t necessarily a sign of a technical failure that needs “fixing” to save the site. Instead, it serves as a signal that Google has the capacity and the willingness to crawl the site more extensively.

Think of it as a delivery driver. If a driver keeps stopping at an old address where a business used to be, it’s because their route still includes your neighborhood and they have the time to make the stop. If they didn’t care about your neighborhood or if their schedule was too tight, they would skip the stop entirely. In Google’s case, if Googlebot is hitting 404s, it means your “crawl limit” is high enough that Google can afford to check those old URLs just in case they have been resurrected or redirected.

Crawl Budget: The Hidden Economy of SEO

To fully grasp why 404 crawling is a positive sign of “openness,” we must discuss crawl budget. Crawl budget is the number of URLs Googlebot can and wants to crawl on your site within a specific timeframe. This budget is determined by two main factors: crawl rate limit and crawl demand.

Crawl Rate Limit: This is a technical limit designed to ensure that Googlebot doesn’t overwhelm your server. If your server responds quickly, Googlebot increases the limit. If the server slows down or returns errors, Googlebot dials back.

Crawl Demand: This is based on how much Google wants to crawl your site. Popular sites and sites with frequently updated content have higher crawl demand.

When Googlebot crawls 404 pages, it is utilizing part of that crawl budget. If your site had a very low crawl demand or a restricted crawl rate limit, Google would prioritize only the most important, high-traffic pages. The fact that it is “wasting” resources on 404s indicates that your site has a surplus of crawl interest. Google is effectively saying, “We have checked all your important pages, and we still have room to check these older ones too.”
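If you want to see this surplus on your own site, server logs are the most direct evidence. Here is a rough sketch that assumes a standard combined-format access log; a production version should verify Googlebot via reverse DNS rather than trusting the user-agent string.

```python
import re
from collections import Counter

# Matches GET requests in a combined-format access log where the
# user-agent string claims to be Googlebot. The log path and regex
# are assumptions about a typical server setup.
LINE = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}) .*Googlebot')

not_found = Counter()
with open("access.log", encoding="utf-8") as log:
    for line in log:
        m = LINE.search(line)
        if m and m.group("status") == "404":
            not_found[m.group("path")] += 1

# Frequently hit 404s are candidates for a 301 redirect, or simply a
# sign that Google still has crawl appetite to spare for this site.
for path, hits in not_found.most_common(20):
    print(f"{hits:>5}  {path}")
```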
Why Does Google Find 404 Pages in the First Place?

Googlebot doesn’t just make up URLs out of thin air. If it is crawling a 404 page, it is because it found a link to that URL somewhere. There are several common sources for these “ghost” URLs:

1. Legacy Internal Links
Perhaps you deleted a page months ago but forgot to remove a link to it from an old blog post or a footer menu. Googlebot follows every link it finds, and if that link is still present in your HTML, Google will continue to crawl it.

2. External Backlinks
If another website links to a page on your site that no longer exists, Googlebot will follow that link from the external site to yours. This is one of the most common reasons for 404s. Even if you “fix” everything on your end, you cannot control what other sites do. This is why Google is so lenient with 404 errors; they know it is often out of the webmaster’s control.

3. Old Sitemaps
Sometimes, XML sitemaps are not updated correctly, or cached versions of old sitemaps linger in the system. Googlebot uses sitemaps as a roadmap, and if the roadmap contains old addresses, the bot will follow them.

4. URL Discovery via JavaScript or Social Media
Google has become increasingly sophisticated at finding


Why better signals drive paid search performance

In the modern landscape of digital advertising, the role of the PPC manager has undergone a seismic shift. We have moved away from the era of manual bid adjustments and granular keyword obsession, entering a period dominated by automation and machine learning. In this increasingly automated environment, paid search performance is constrained by a simple, inescapable reality: algorithms can only optimize toward the signals they are given. Consequently, improving those signals remains the most reliable way to improve results in a competitive market.

While the concept of "better signals" sounds straightforward, its execution is where most advertisers struggle. Many accounts are still optimizing around vanity metrics or surface-level signals that do not reflect actual business outcomes. To succeed today, you must stop viewing the algorithm as a magic wand and start viewing it as a high-powered engine that requires high-octane fuel to run correctly. That fuel is your data. In this guide, we will explore the inner workings of bidding algorithms, the specific signals you can influence, and the strategic framework required to align your data with real-world business growth.

How bidding algorithms actually work

Modern bidding systems, such as Google's Smart Bidding or Microsoft Advertising's automated solutions, are frequently described as "black boxes." This terminology suggests that the systems operate mysteriously or according to whims that advertisers cannot understand. However, viewing these systems as a "black box" is counterproductive. To master paid search, you must understand the mechanics of the engine.

At a high level, bidding algorithms are large-scale pattern recognition systems. They don't "think" in the human sense; they calculate probabilities based on historical data and real-time context. Early iterations of automated bidding were relatively primitive, utilizing simple statistical methods, rules-based logic, and regression models. These systems were often reactive, looking at past performance to make future guesses. Over time, they evolved into more advanced machine learning approaches using decision trees and ensemble models. Today, these have become large-scale learning systems capable of processing thousands of contextual and historical inputs simultaneously. This is known as "auction-time bidding," where the system evaluates the unique profile of every single search query in milliseconds.

Today's systems evaluate a massive array of signals, including:

Query Intent: The specific phrasing and nuances of what the user is searching for.
Device and Location: Where the user is and what hardware they are using.
Time of Day: Historical conversion patterns related to specific hours or days of the week.
User Behavior: Previous interactions with your website or similar brands.
Competitive Dynamics: Who else is in the auction and what their historical behavior suggests.

Despite this incredible complexity, the underlying mechanism has stayed remarkably consistent: bidding algorithms identify patterns tied to a desired outcome, estimate that outcome's probability and expected value for each specific auction, and adjust the bid accordingly. They do not understand your business strategy, your quarterly goals, or your brand's mission. They only infer success from the feedback loop you provide. When that feedback loop is weak, noisy, or misaligned with real business value, even the most advanced algorithms will efficiently optimize toward the wrong objective. Better technology does not compensate for poor inputs.
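To make that mechanism concrete, here is a deliberately simplified toy model, not how Google or Microsoft actually implement auction-time bidding, of how a predicted conversion rate and a target translate into a bid. The field names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AuctionSignals:
    """A tiny, hypothetical subset of the inputs a bidder evaluates per auction."""
    predicted_cvr: float    # modeled probability that this click converts
    predicted_value: float  # expected revenue if a conversion happens

def bid_for_target_cpa(signals: AuctionSignals, target_cpa: float) -> float:
    # Pay at most: P(conversion) x the cost you will accept per conversion.
    return signals.predicted_cvr * target_cpa

def bid_for_target_roas(signals: AuctionSignals, target_roas: float) -> float:
    # Pay at most: expected conversion value / the return multiple you demand.
    return (signals.predicted_cvr * signals.predicted_value) / target_roas

auction = AuctionSignals(predicted_cvr=0.04, predicted_value=120.0)
print(bid_for_target_cpa(auction, target_cpa=50.0))   # 2.0 -> bid up to $2.00 per click
print(bid_for_target_roas(auction, target_roas=4.0))  # 1.2 -> bid up to $1.20 per click

# Garbage in, garbage out: if "conversions" include worthless page views,
# predicted_cvr is trained on the wrong outcome and every bid is wrong too.
```

The point of the toy model is the dependency it exposes: both formulas are driven entirely by predictions learned from your conversion feed, so mislabeled or missing conversions distort every bid downstream.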
The signals advertisers can influence

While it is true that many signals used by Google and Microsoft are "inferred" and sit outside an advertiser's direct control, it is a mistake to think we are powerless. There is a meaningful set of levers under your control that directly shape how the algorithm learns. These inputs define the environment in which the "black box" operates. To influence performance, you must optimize the following areas.

Account and campaign structure
The way you group your data determines how much information the algorithm has to work with. If your structure is too fragmented, the algorithm suffers from "data sparsity," meaning it doesn't have enough conversions in a single bucket to find a pattern. Conversely, if it is too consolidated, you might be mixing audiences with vastly different behaviors, confusing the system.

Bidding strategy selection
Choosing between Target CPA (tCPA), Target ROAS (tROAS), or Maximize Conversions is essentially telling the machine which mathematical formula to prioritize. A mismatch here, such as using tCPA for a high-ticket item with a long sales cycle, can lead to stagnant performance.

Budget allocation and risk management
Budgets act as the boundaries of the algorithm's "playground." If a budget is too restrictive, the algorithm cannot "explore" new auctions to find cheaper conversions. Effective budget management involves balancing scaling against the risk of diminishing returns.

Targeting and exclusions
While automation handles much of the heavy lifting, exclusions (negative keywords, placement exclusions, audience exclusions) remain vital. They act as "guardrails," preventing the machine from wasting spend on irrelevant traffic that might look good on paper but never converts.

Ad creative and asset quality
Creative is now a primary targeting signal. In modern systems, the language used in your headlines and descriptions helps the AI understand who your audience is. High-quality assets lead to better engagement, which in turn gives the algorithm more positive data points to learn from.

Landing page experience
The algorithm's evaluation doesn't stop at the click; it monitors what happens next. A poor landing page experience leads to high bounce rates and low conversion rates, signaling to the algorithm that the traffic it sent was not valuable. This creates a downward spiral of lower bids and reduced visibility.

Conversion data: The most important signal

When paid search performance plateaus, the first instinct of many marketers is to blame the campaign structure or the creative. While those are important, the biggest lever available usually sits elsewhere: conversion data. In most modern accounts, conversion data is the single most influential signal you control. The conversion is the "North Star" for the bidding algorithm. It defines the successful outcome the system is trained to pursue, and it directly informs prediction models, bid calculations, and learning feedback loops. If your conversion setup is flawed, the entire machine is broken.

The most common issue with conversion data is noisy signals: tracking low-intent actions such as "page views" as conversions, which teaches the algorithm to chase traffic rather than business outcomes (illustrated in the sketch below).
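Here is a small hypothetical sketch of that problem. The event names and values are invented; the point is that a "count everything" conversion definition and a value-aligned one report the same revenue but teach the algorithm very different lessons.

```python
# Hypothetical raw analytics events; names and values are invented.
raw_events = [
    {"type": "page_view",    "value": 0.0},
    {"type": "page_view",    "value": 0.0},
    {"type": "newsletter",   "value": 0.0},
    {"type": "demo_request", "value": 150.0},  # estimated pipeline value
    {"type": "purchase",     "value": 890.0},
]

# Noisy definition: every event is reported to the platform as a "conversion".
noisy_feed = raw_events

# Value-aligned definition: only revenue-linked events, with real values.
MEANINGFUL = {"demo_request", "purchase"}
clean_feed = [e for e in raw_events if e["type"] in MEANINGFUL]

def summarize(label, events):
    total = sum(e["value"] for e in events)
    print(f"{label}: {len(events)} conversions, ${total:.0f} total, "
          f"${total / len(events):.0f} per conversion")

summarize("noisy", noisy_feed)  # noisy: 5 conversions, $1040 total, $208 per conversion
summarize("clean", clean_feed)  # clean: 2 conversions, $1040 total, $520 per conversion
```

An algorithm optimizing toward the noisy feed will happily buy cheap page views; the clean feed forces it to find the users who actually generate value.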
