

The real story behind the 53% drop in SaaS AI traffic

The Shift from Panic to Precision: Understanding the 53% Decline

The software industry is currently navigating a period of intense volatility, recently punctuated by a phenomenon Wall Street has dubbed the “SaaSpocalypse.” This term emerged after investors, spooked by the rapid advancement of autonomous AI agents like Claude Cowork and the potential for these tools to replace traditional enterprise software, erased nearly $300 billion from SaaS market caps. Amidst this financial turbulence, new data has emerged showing a staggering 53% drop in AI-driven discovery sessions between July and December 2025.

At first glance, this figure appears to confirm the worst fears of the industry: that the honeymoon phase for AI-driven software discovery is over. However, a closer look at the data reveals a much more nuanced story. This isn’t a narrative of AI’s failure, but rather a story of how AI is maturing, how user behavior is shifting toward integrated workflows, and why the “drop” is actually a reflection of standard B2B buying cycles.

For SEO professionals and digital marketers in the tech space, the 53% decline is a distraction. The real story lies in the shifting distribution of traffic, the rise of workplace-embedded AI, and the critical technical gaps that are preventing SaaS companies from being discovered by the next generation of buyers.

The Competitive Landscape: Copilot’s Meteoric Rise

Between November 2024 and December 2025, SaaS websites recorded a total of 774,331 LLM-driven sessions. While ChatGPT remains the undisputed leader in volume, the growth rates of its competitors suggest a fundamental change in where and how users interact with artificial intelligence.

SaaS AI Traffic by Source (Nov 2024 – Dec 2025)
Source | Sessions | Share
ChatGPT | 637,551 | 82.3%
Copilot | 74,625 | 9.6%
Claude | 40,363 | 5.2%
Gemini | 15,759 | 2.0%
Perplexity | 6,033 | 0.8%

While ChatGPT captures over 82% of the traffic, its growth rate has stabilized at 1.42x. In contrast, Microsoft’s Copilot has seen an explosive 15.89x year-over-year growth. In late 2024, Copilot was a non-factor, driving a mere 148 sessions. By May 2025, that number had grown 20-fold. By the end of the year, Copilot solidified its position as the second-largest referrer of AI traffic to SaaS platforms.

This growth is driven by proximity. Unlike ChatGPT, which requires a user to navigate to a separate tab or app to conduct research, Copilot is embedded directly into the Microsoft 365 ecosystem. When a business analyst is drafting a proposal in Word or a sales manager is projecting revenue in Excel, Copilot is there to answer questions like, “What CRM integrates best with our current stack?” or “Find me a project management tool for a 20-person team.”

This “workplace-embedded AI” captures intent at the exact moment it occurs. It captures the “work” that ChatGPT never sees because the user never has to leave their primary workflow. The May 2025 surge in Copilot traffic suggests a mass realization among enterprise users that they could research and evaluate software without disrupting their current tasks.

The “Internal Search” Bottleneck: Why 41.4% of Traffic is Landing on the Wrong Page

One of the most revealing aspects of the recent data is where AI-driven users land when they finally click through to a SaaS website. The distribution is highly skewed, revealing a significant gap in how AI agents perceive and navigate software sites.

Top SaaS Landing Pages by LLM Volume
Page Type | LLM Sessions | % of AI Traffic | Penetration vs Site Avg
Search | 320,615 | 41.4% | 8.7x
Blog | 127,291 | 16.4% | 8.1x
Pricing | 40,503 | 5.2% | 3.2x
Product | 39,864 | 5.1% | 2.0x
Support | 34,599 | 4.5% | 2.1x

Internal search result pages are the dominant landing surface, capturing 41.4% of all AI traffic. This is more than the combined traffic of blog, pricing, and product pages. For a SaaS marketer, this should be a cause for concern. Users aren’t landing on search pages because search pages provide the best experience; they are landing there because the AI doesn’t know where else to send them.

This is a “safety net” effect. When an LLM like ChatGPT or Claude is asked a specific question about a software’s capabilities, it attempts to find a direct answer. If the product or pricing pages lack clear, structured data that the AI can parse, the AI defaults to the site’s internal search bar. It assumes that the search schema will generate a relevant list of results even if a specific, high-value page isn’t indexed or understood.

Internal search page penetration is 8.7x the site average. This is not a sign of optimization; it is a sign of a crawlability problem. The AI recognizes the search URL structure and trusts it as a fallback. However, internal search pages are often poorly formatted for conversion, providing paginated lists with minimal detail. If your highest-intent AI traffic is landing on a generic search result page, your conversion rates will inevitably suffer.
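One way to start closing that gap, offered here as a rough sketch rather than a prescription, is to give pricing and product pages explicit machine-readable facts. The product name, plan, price, and URL below are invented placeholders; the schema.org types used (SoftwareApplication and Offer) are standard vocabulary.

```html
<!-- Hypothetical JSON-LD for a SaaS pricing page; every value is a placeholder. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "ExampleCRM",
  "applicationCategory": "BusinessApplication",
  "operatingSystem": "Web",
  "offers": {
    "@type": "Offer",
    "name": "Team plan",
    "price": "29.00",
    "priceCurrency": "USD",
    "url": "https://www.example.com/pricing"
  }
}
</script>
```

The point is not this exact markup, but that a parser landing on the pricing page finds explicit, unambiguous answers instead of being bounced to internal search.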
Debunking the Decline: Seasonality and Fiscal Cycles

The 53% drop in traffic from July to December 2025 has been used by some analysts to argue that AI discovery is a dying trend. However, when we overlay this data with traditional B2B buying behavior, the decline looks less like a crash and more like a standard seasonal rhythm.

SaaS AI traffic peaked in July 2025 with 146,512 sessions. The subsequent months showed a steady decline:

July 2025: 146,512 (Peak)
August 2025: 120,802 (-17.5%)
September 2025: 134,162 (+11.1%)
October 2025: 135,397 (+0.9%)
November 2025: 107,257 (-20.8%)
December 2025: 68,896 (-35.8%)

The drop-off in November and December was particularly sharp, mirroring the behavior across all major platforms. ChatGPT’s volume was slashed by half, and even the high-growth Copilot saw its traffic nearly halved. The reason for this is simple: AI-driven software discovery is a workplace activity. August is the height of the summer vacation season in the Northern Hemisphere. November includes the Thanksgiving holiday in the U.S., and December is dominated by the global end-of-year


If SEO is rocket science, AI SEO is astrophysics

The landscape of search engine optimization has undergone a seismic shift. For decades, SEO professionals viewed their craft through the lens of “rocket science”—a complex but ultimately linear process of launching pages into the stratosphere of the SERPs (Search Engine Results Pages). You built a vessel, fueled it with keywords and backlinks, and hoped it reached the intended orbit. But as we transition into an era dominated by Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG), the metaphor must change. If traditional SEO is rocket science, AI SEO is astrophysics.

In the world of Google AI Overviews and LLM-driven discovery, the goal is no longer just “getting there.” It is about understanding the fundamental laws that govern the semantic universe. Search is no longer a flat map of links; it is a multidimensional space where entities exert gravitational pull, and visibility is determined by density, relationship, and machine-verifiable truth. To succeed in this new environment, content must be more than just credible—it must be structured and reinforced so that machines can extract and reuse it with absolute confidence.

Why traditional authority signals worked – until they didn’t

For a long time, the industry relied on E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) as a spiritual guide. SEOs optimized author bios, showcased credentials, and polished “About” pages. The theory was that these signals would tell Google that a site was a trustworthy source. However, in practice, we all knew what truly moved the needle: backlinks. External validation via links was the hard currency of the web. E-E-A-T helped a site look the part, but links provided the actual power.

This arrangement worked as long as authority could be vague. If a site had enough links, Google was willing to “infer” authority. But in AI-driven retrieval, inference is a liability. Systems like ChatGPT, Claude, and Gemini don’t just acknowledge your authority; they have to use it. They extract your facts, summarize your insights, and integrate your data into their answers. If your authority cannot be located, verified, and extracted within a semantic system, it simply won’t shape the retrieval process.

Being authoritative in a way that machines cannot verify is like being “paid” in experience. It might feel good, but it doesn’t pay the bills in terms of traffic or visibility. AI systems prioritize utility over prestige. If a model cannot confidently attribute a fact to you because your entity data is fragmented or your content structure is opaque, it will move on to a source that is easier to parse, even if that source has less “prestige” in the eyes of a human reader.

How AI systems calculate authority

Modern search no longer operates on a flat plane of keywords. Instead, AI-driven systems rely on a high-dimensional semantic space. This space models the relationships between entities (people, places, things, and concepts) and calculates their proximity to one another. In this environment, entities function like celestial bodies. Their influence is defined by their mass, their distance from other entities, and how they interact with the surrounding “matter” of the web.

In AI Overviews and similar retrieval systems, visibility does not hinge on brand recognition alone. Recognition is a symptom of entity strength, not the source of it.
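To make the idea of “proximity” concrete, here is a minimal sketch of how retrieval systems typically compare an entity to a query in embedding space. The four-dimensional vectors and the cosine-similarity comparison are illustrative assumptions; production systems use embedding models with hundreds or thousands of dimensions.

```javascript
// Toy example: compare a hypothetical brand entity to a query in embedding space.
// Real embeddings come from a model; these 4-dimensional vectors are made up.
const brandEntity = [0.8, 0.1, 0.6, 0.2];
const queryVector = [0.7, 0.2, 0.5, 0.1];

function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, value, i) => sum + value * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// A score near 1 means the entity sits close to the query in semantic space;
// a score near 0 means the model has little reason to surface it.
console.log(cosineSimilarity(brandEntity, queryVector).toFixed(3)); // ≈ 0.988
```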
What matters is whether a model can locate your entity within its semantic environment and whether that entity has accumulated enough “mass” to exert gravitational pull on a query. This semantic mass is built through three primary pillars:

1. Third-party corroboration
Models don’t “trust” in the human sense; they calculate statistical probability. If your claims are echoed, cited, and reinforced across a broad corpus of high-quality data, your entity gains mass. Every independent reference adds weight, making it harder for the system to ignore you when a relevant query enters its orbit.

2. Machine-legible structure
Authority must be extractable. This means using consistent authorship, clear schema markup, and explicit entity relationships. If the model can’t tell which “John Smith” wrote the article or whether “Acme Corp” is a software company or a hardware provider, the entity mass is fragmented and weakened.

3. Density over size
In astrophysics, a gas giant might be enormous but have less gravitational pull on its surroundings than a smaller, much denser neutron star. AI visibility works the same way. A legacy publisher might have millions of pages, but if their authority is spread thin across too many unrelated topics, their “density” on a specific subject might be low. Conversely, a niche brand that is consistently reinforced as an expert in one specific area will exert a much stronger pull on relevant queries.

The E-E-A-T misinterpretation problem

The fundamental issue with E-E-A-T was never the concept itself, but how it was operationalized. Many SEOs treated E-E-A-T as a checklist of on-page trust signals: “Add an author photo, link to a LinkedIn profile, and mention our 20 years of experience.” These were signals a site applied to itself. They were easy to audit, which made them popular, but they did little to change how authority was actually conferred by the algorithm.

These surface-level markers fail in LLM retrieval because they don’t provide the external reinforcement required to give an entity real mass. In a semantic system, compliance is not comprehension. Just because you followed the “checklist” doesn’t mean the model understands who you are or why you should be prioritized. Models aren’t evaluating your intent or your presentation; they are evaluating semantic consistency and whether your claims can be cross-verified elsewhere.

E-E-A-T isn’t outdated—it’s just incomplete. It explains why a human might trust you, but it doesn’t provide the statistical density that a machine needs to include you in a retrieval-augmented generation (RAG) pipeline. Applying E-E-A-T principles only within the four walls of your own website is a strategy for the past. To win today, you must ensure your E-E-A-T is reflected in the broader web corpus.

AI doesn’t trust, it calculates

We must bridge the gap between human trust and machine confidence. Human trust is often emotional and based on charisma,


How social discovery shapes AI search visibility in beauty

The landscape of digital discovery is undergoing a seismic shift. For years, the path to a beauty purchase was linear: a consumer realized a need, went to a search engine, and evaluated results. Today, that journey has fractured into a complex ecosystem where visibility in Artificial Intelligence (AI) search results is rarely earned at the point of the query. Instead, AI search visibility is the byproduct of a brand’s presence across social discovery platforms long before a user ever types a prompt into ChatGPT or Google’s Gemini.

In the beauty industry, where visual proof and peer validation are the primary currencies, brands that dominate generative answers are those that have already been vetted, discussed, and reinforced by human communities. By the time a user turns to an AI assistant for a recommendation, the “decision” has often been influenced by signals formed upstream. To understand the future of SEO, we must look at how social discovery acts as the foundational layer for AI search visibility.

Discovery didn’t move to AI – it fragmented

There is a common misconception that search behavior has simply migrated from Google to AI tools. In reality, discovery has fragmented across a multitude of platforms, with AI functioning primarily as a mid-funnel tool for consideration and synthesis rather than raw discovery. The signals that determine which brands an AI model recommends are cultivated in the “upstream” environments of social media and community forums.

According to research from eMarketer, roughly two-thirds of U.S. consumers now utilize social platforms as their primary search engines. This shift is not exclusive to Gen Z; it reflects a broader change in how all demographics validate information. Users are increasingly skeptical of traditional advertisements and are turning to platforms like TikTok, YouTube, and Reddit to find “real” answers. Because AI models are trained on this web-scale data, the brands that appear most frequently in social conversations are the ones the AI identifies as authoritative.

If a beauty brand waits until a user is interacting with an AI search engine to attempt to influence their demand, the window of opportunity has already narrowed significantly. The AI is not discovering new brands on its own; it is reflecting the existing consensus found on the social web. This makes social discovery the most critical component of a modern AI visibility strategy.

The hierarchy of citation sources

A deep dive into how AI models cite their sources reveals a clear preference for community-driven content. In a recent study conducted with a beauty brand partner, data showed that Reddit, YouTube, and Facebook ranked among the most frequently cited domains in both Google’s AI Overviews and ChatGPT. This data highlights a crucial trend: AI engines prioritize third-party validation over brand-owned marketing copy.

While many brands view Reddit as a “risky” or “anti-brand” environment, it remains one of the most powerful engines for AI search visibility. The authentic, unfiltered nature of Reddit discussions provides the high-signal data that Large Language Models (LLMs) crave. YouTube, similarly, serves as a massive repository of expert and enthusiast content. Despite being a video-first platform, its transcripts and descriptions are primary targets for AI crawlers looking to triangulate trust and authority in the beauty space.
The volume reality: Social behavior still outpaces AI

While AI search is a strategically vital frontier, it is important to maintain perspective regarding its current scale compared to social discovery. Headlines often tout the billions of prompts processed by AI daily, but when we look at measurable business outcomes—specifically referral traffic and transactions—the social giants still dominate the landscape.

Social search behavior is habitual. On platforms like TikTok, search activity is embedded into the user experience. Research indicates that nearly 40% of TikTok users search the platform multiple times every day, and a staggering 73% search at least once daily. This represents a level of daily utility that AI search tools have yet to reach for the average consumer.

The disparity becomes even clearer when examining referral data. A working paper from the University of Hamburg and the Frankfurt School analyzed 973 ecommerce sites and found that ChatGPT referral traffic accounted for only about 0.2% of total sessions. In that same dataset, organic search traffic from Google was approximately 200 times larger than referral traffic from LLMs. This suggests that while AI is excellent at providing answers, it is not yet a primary driver of direct “click-through” traffic in the way that social and traditional search engines are.

For beauty brands, this means that the immediate ROI still lives in social discovery and traditional search, even as the long-term strategic value shifts toward AI optimization. The goal for 2026 and beyond is to win the social conversation to ensure that when AI usage inevitably scales, your brand is already the established answer.

The validation loop: Why AI needs social

To succeed in the age of AI, marketers must understand that LLMs are not primary sources of truth. They are consensus mirrors. They analyze vast amounts of data to determine what “people” think is the best solution for a specific problem. This creates a validation loop: social media creates the consensus, and AI reflects it.

AI systems are programmed to be skeptical of brand-owned properties. Research has shown that only about 25% of sources cited in AI-generated answers come from brand-managed websites. The AI “knows” that a brand’s own website will be biased. Therefore, it looks for external validation from independent voices.

The Reddit and sentiment factor

Data from OtterlyAI suggests that up to 6.4% of citation links in AI responses originate from Reddit, outperforming many traditional high-authority publishers. This is because the conversational nature of Reddit allows AI to gauge not just the presence of a brand, but the sentiment surrounding it. There is a measurable positive correlation between positive social media sentiment and visibility in AI search results. If a beauty product is being praised on r/SkincareAddiction for its efficacy on sensitive skin, an AI tool like Gemini is


Local SEO sprints: A 90-day plan for service businesses in 2026

Local search remains the most potent driver of high-intent leads for service-based businesses. Whether you are a plumber in a bustling suburb or a law firm in a major metropolitan center, your ability to appear in the “Map Pack” is often the difference between a record-breaking month and a silent phone. However, the landscape of local SEO has shifted dramatically as we move into 2026. The era of “set it and forget it” local optimization is over. Google’s algorithms have become significantly more sophisticated, integrating AI-driven signals and tightening the reins on spam.

To succeed today, business owners and digital marketers need a disciplined, structured approach. This 90-day sprint plan is designed to provide that structure, moving your business from invisibility to local authority through a series of focused, high-impact actions.

Why local visibility is more volatile in 2026

Many service businesses find themselves in a frustrating cycle: they have a Google Business Profile (GBP) and a functional website, yet their organic lead flow has plateaued or declined. The primary reason is that the rules governing local visibility have been fundamentally rewritten. What worked in 2022 or 2023—such as simple keyword stuffing or occasional post updates—is no longer sufficient.

The volatility we see in 2026 is largely a result of Google’s 2025 spam updates. These updates represented a massive effort to clean up map results by filtering out review spam, fake addresses, and businesses using keyword-stuffed titles. Google is now more aggressive in enforcing its “real-world” detail policy. If your digital footprint doesn’t perfectly mirror your physical business operations, the algorithm is likely to deprioritize your profile.

Furthermore, the integration of AI-driven features in search results means that Google is looking for deeper context. It isn’t just looking for the word “HVAC”; it is looking for proof of expertise, service area confirmation, and genuine customer sentiment. This has led to a landscape where rankings can fluctuate even when a business owner hasn’t changed anything on their site. To combat this instability, a sprint-based approach is necessary to establish a foundation that can withstand algorithmic shifts.

3 lead levers that matter most for local search

Before diving into the day-by-day plan, it is essential to understand the three core pillars—or “lead levers”—that dictate your success in local search. If your local SEO strategy is failing, one of these three levers is almost certainly broken. Strengthening all three creates a synergistic effect that drives both traffic and conversions.

Lead Lever | What it Means | What it Changes
Relevance | Google clearly understands your specific services and the geographic areas you serve. | Increases your frequency in the Map Pack for specific, high-intent queries.
Prominence | The “weight” of your business online, determined by reviews, links, and local mentions. | Provides ranking stability and encourages higher click-through rates.
Conversion | How easily your website and GBP turn a casual browser into a booked appointment. | Maximizes the ROI of your existing traffic by reducing friction.

Google evaluates your business across these signals constantly. Durable relevance isn’t just about picking the right category; it’s about providing enough localized content and data points that Google views you as the definitive expert in your “service-plus-location” niche.

The 90-day sprint plan

A sprint is not a marathon, but it requires the same level of commitment. By breaking your local SEO efforts into 90 days, you create a manageable cadence that allows for data collection, implementation, and refinement. Here is how to execute that plan in 2026.

Sprint warm-up (Days 1-3): Establish your measurement baseline

The biggest mistake in local SEO is starting the work without knowing exactly where you stand. Guesswork is the enemy of ROI. Before you change a single word on your website, you must ensure your tracking is airtight. If you cannot trace a phone call or a form submission back to its original source, you cannot determine which part of your SEO sprint is actually working.

The “warm-up” phase is about technical hygiene. Use the following checklist to ensure you are ready for the sprint. If you have “No” for any of these items, stop and fix them immediately.

Item | What “Done” Means | Notes
GA4 Setup | Google Analytics 4 is installed and capturing conversion events. | Essential for tracking user behavior on-site.
Search Console | GSC is verified and connected to your domain. | This is where you see your actual ranking keywords.
GBP Insights | Baseline data from the last 3-6 months is saved. | You need this to compare “before” and “after” sprint results.
UTM on GBP Link | A UTM parameter is added to your website URL in the GBP dashboard. | Crucial for separating “GBP traffic” from “Standard Organic traffic” in GA4.
Call Tracking | A dynamic or static tracking number is used (e.g., CallRail). | Critical for service businesses where the primary lead is a phone call.
Form Tracking | Each form submit is tracked as a unique conversion event. | Don’t just track “page visits”; track “successful submissions.”

Once your tracking is live, capture a “Baseline Snapshot.” Record your calls, website clicks, form submissions, and Google Search Console (GSC) impressions for the last 7 and 28 days. This is your starting line.
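To make the two most error-prone checklist items concrete, here is a lightly hedged sketch. The domain, form ID, campaign values, and event name below are placeholders, and gtag() assumes the standard GA4 snippet is already installed on the page.

```html
<!-- Hypothetical UTM-tagged URL pasted into the GBP "Website" field, so GBP
     visits appear under their own source/medium in GA4 instead of blending
     into generic organic traffic. -->
https://www.example-plumbing.com/?utm_source=google&utm_medium=organic&utm_campaign=gbp-listing

<!-- Minimal form-tracking sketch: fire a GA4 event only when the form is
     actually submitted, not when the contact page merely loads. -->
<script>
  const form = document.querySelector('#contact-form'); // hypothetical form id
  if (form) {
    form.addEventListener('submit', () => {
      gtag('event', 'generate_lead', { form_id: 'contact-form' }); // GA4 recommended event name
    });
  }
</script>
```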
Phase 1 (Days 4-10): Fix GBP fundamentals

The Google Business Profile is the “front door” of your local digital presence. In 2026, Google uses the GBP as a primary entity-identifier. If the information here is vague or inaccurate, your website’s SEO will struggle to gain traction.

Primary and Secondary Categories: This is the most common area for errors. Your primary category should be the most specific match for your “money maker”—the service that generates the most profit. If you are a roofing contractor, don’t just choose “Contractor.” Be precise. Secondary categories should only include services you actually perform. Avoid “category dilution,” which occurs when you add dozens of marginally related categories in a misguided attempt to rank for everything. This confuses the algorithm and weakens your authority in your core niche.

Address and Service Area Reality: Google has become much stricter regarding Service Area Businesses (SABs). If you do not have a


90 Days. 1 Plan. Improved Local Search Visibility [Webinar] via @sejournal, @hethr_campbell

Mastering Local Search in the Age of AI

The landscape of digital marketing is shifting at a rapid pace. For brands with multiple locations, the challenge of maintaining visibility is no longer just about appearing in a standard list of search results. Today, local search visibility requires a sophisticated approach that accounts for traditional SEO, user experience, and the emerging influence of Artificial Intelligence (AI).

Search engines are evolving into “answer engines.” With the rise of Google’s Search Generative Experience (SGE) and other AI-driven platforms, the way consumers find local businesses is changing. They are asking complex questions and expecting precise, localized answers. To stay competitive, businesses need a structured, actionable strategy. A 90-day plan provides the perfect timeframe to audit, implement, and optimize local search efforts to ensure a brand is not only visible but also AI-ready.

The Core Importance of a 90-Day Local SEO Strategy

Why 90 days? In the world of SEO, three months is the “sweet spot.” It is long enough to see the impact of technical changes and content updates, yet short enough to maintain momentum and pivot if data suggests a different direction. For multi-location networks, this timeframe allows for a systematic rollout across dozens or hundreds of locations without overwhelming internal teams.

A unified 90-day plan ensures that every location in a network adheres to the same quality standards. It eliminates the “fragmented brand” problem where one branch has a perfect online presence while another is virtually invisible. By the end of this period, the goal is to have a cohesive, data-rich digital footprint that AI models can easily crawl and recommend to users.

Month 1: Building a Rock-Solid Foundation

The first 30 days of the plan are dedicated to assessment and cleanup. You cannot build a high-performing local SEO strategy on a foundation of inaccurate data.

Comprehensive Local Audit

The initial step is a deep dive into the current state of your local presence. This involves auditing every Google Business Profile (GBP), Bing Places for Business, and Apple Maps listing. You must identify inconsistencies in NAP (Name, Address, Phone number) data. Even a slight variation, such as “Street” versus “St.,” can sometimes confuse older algorithms, though modern AI is better at reconciliation. The goal is 100% accuracy across the web.

Google Business Profile Optimization

Google Business Profile remains the crown jewel of local search. During the first month, every profile in the network should be fully optimized. This goes beyond filling out the basic contact info. It includes:

- Selecting the most accurate primary and secondary categories.
- Writing compelling, keyword-rich business descriptions.
- Adding high-resolution photos of the interior, exterior, and products.
- Setting up “Attributes” (e.g., “Black-owned,” “Women-led,” “Wheelchair accessible”), which are increasingly used as filters in search.

Claiming and Verifying Unmanaged Listings

For multi-location brands, it is common to find “ghost” listings created by users or automated aggregators. Month one involves claiming these listings and merging duplicates. This prevents “ranking cannibalization,” where two listings for the same business compete against each other, ultimately lowering the visibility of both.
Month 2: Content Strategy and Reputation Management

Once the data foundation is secure, the focus shifts to the “signals” that tell search engines your business is active, trustworthy, and relevant to the local community.

Hyper-Local Landing Pages

A major mistake multi-location brands make is using the same generic content for every location page. To be AI-ready, your website needs to provide specific, local context. During month two, focus on creating or enhancing local landing pages. These pages should include:

- Location-specific descriptions and history.
- Testimonials from local customers.
- Local news or community involvement details.
- Embedded maps and clear directions.
- Unique service menus or product availability for that specific branch.

The Power of Review Velocity and Quality

Reviews are one of the most significant ranking factors for the “Local Pack.” However, it isn’t just about the star rating. Search engines and AI models look at “Review Velocity” (how often you get new reviews) and “Review Diversity” (what people are actually saying). During this phase, implement a system to encourage satisfied customers to leave feedback.

Responding to reviews is equally critical. When you respond to a review, you are not just talking to one customer; you are providing “fresh content” for search engines to index. Use these responses to naturally incorporate local keywords and service names.

Local Link Building and Citations

While global backlinks matter, local SEO thrives on local relevance. Identify opportunities for links from local chambers of commerce, neighborhood blogs, and local news outlets. These signals act as a vote of confidence that your business is a legitimate part of the local ecosystem.

Month 3: Optimization for AI and Advanced Technical SEO

The final 30 days are about future-proofing. This is where the brand becomes “AI-ready.” AI models like ChatGPT, Gemini, and Claude rely on structured data to understand the world. If your data isn’t structured, these models may ignore you or, worse, provide incorrect information about your business.

Implementing Advanced Schema Markup

Schema markup (JSON-LD) is a specialized code that tells search engines exactly what your data means. For local businesses, “LocalBusiness” schema is essential. In month three, ensure that every location page has detailed schema that includes:

- Operating hours (including holiday hours).
- Price ranges.
- Service areas.
- Geo-coordinates (latitude and longitude).
- Menu URLs or booking links.

By providing this data in a structured format, you make it significantly easier for AI-driven search engines to extract and display your information in “Instant Answers” or voice search results. (A sketch of what such markup can look like appears at the end of this section.)

Optimizing for Voice and Conversational Search

As more users turn to voice assistants like Siri and Alexa, local search queries are becoming more conversational. Instead of searching for “Pizza NYC,” users are asking, “Where is the best gluten-free pizza near me that is open now?” In the final phase of the 90-day plan, review your content to ensure it answers these long-tail, conversational questions. FAQ sections on local landing pages are an excellent way to capture
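Picking up the schema checklist above, a minimal sketch of location-page markup might look like the following. Every value is a placeholder (the Restaurant type is used only because the example is a pizza shop); real markup should mirror the details already published on the page.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Pizza – Downtown",
  "url": "https://www.example-pizza.com/locations/downtown",
  "telephone": "+1-555-555-0100",
  "priceRange": "$$",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701",
    "addressCountry": "US"
  },
  "geo": { "@type": "GeoCoordinates", "latitude": 39.7817, "longitude": -89.6501 },
  "openingHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
    "opens": "11:00",
    "closes": "21:00"
  }],
  "hasMenu": "https://www.example-pizza.com/locations/downtown/menu"
}
</script>
```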


Discover Core Update, AI Mode Ads & Crawl Policy – SEO Pulse via @sejournal, @MattGSouthern

The Shifting Sands of Digital Visibility

The digital marketing landscape is currently undergoing one of its most transformative periods since the advent of the search engine itself. As we look at the latest developments in the SEO sector, three major pillars are emerging as the primary focal points for publishers, developers, and brands: the extreme volatility within Google Discover following core updates, the aggressive expansion of ad inventory within AI-driven search modes, and the tightening of crawl policies as the battle for data ownership intensifies.

For years, SEO was a relatively linear process of keyword optimization and backlink building. Today, it has evolved into a complex ecosystem where user intent, machine learning, and technical accessibility intersect. Staying ahead of these changes is no longer just about maintaining rankings; it is about survival in an era where AI is redefining how information is consumed and how revenue is generated.

Google Discover and the Impact of Core Updates

Google Discover has become a massive traffic driver for many publishers, often rivaling or even surpassing traditional organic search traffic. Unlike traditional search, which relies on a user entering a specific query, Discover is a highly personalized feed that pushes content to users based on their interests, browsing history, and behavioral patterns. However, recent core updates have introduced a level of volatility that has left many digital strategists scrambling to understand the new rules of engagement.

The Volatility Factor in Discover

Unlike the standard Search Engine Results Pages (SERPs), Discover is notoriously fickle. A site can see millions of impressions one day and virtually zero the next. This volatility is often tied directly to Google’s Core Updates. When Google adjusts its underlying ranking algorithms, the “interest-based” engine of Discover experiences significant tremors.

Recent data suggests that the threshold for quality in Discover has been raised. Google is increasingly prioritizing content that demonstrates high levels of E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness). Sites that rely on “clickbait” headlines or thin content are finding themselves sidelined during these core updates. The challenge for SEOs is that Discover lacks the transparency of traditional search; there are no specific keywords to track, only categories and entities.

Optimizing for the Discover Feed

To mitigate the impact of core update volatility, publishers must focus on engagement metrics that signal value to the algorithm. This includes:

1. High-Quality Visuals: Google Discover is a visual-first medium. Articles must feature high-resolution images that are at least 1,200 pixels wide and enabled by the max-image-preview:large setting (see the snippet after this list).
2. Compelling but Honest Headlines: Headlines must capture interest without resorting to deceptive clickbait, which Google’s AI is increasingly adept at identifying and penalizing.
3. Entity Relevance: Discover relies heavily on the “Knowledge Graph.” By ensuring your content is clearly mapped to specific entities (people, places, things, or concepts), you increase the likelihood of appearing in the feeds of users interested in those specific topics.
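As referenced in item 1, the large-preview opt-in is a one-line robots meta tag (Google also documents an equivalent X-Robots-Tag response header):

```html
<!-- Allows Google to show large image previews of the page, a prerequisite
     for strong visual presentation in Discover. -->
<meta name="robots" content="max-image-preview:large">
```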
The Expansion of AI Mode Ads

As Google continues to integrate its AI-powered search experiences—formerly known as the Search Generative Experience (SGE) and now commonly referred to as AI Overviews—the company is also finding new ways to monetize these interactions. The introduction of AI Mode Ads represents a significant shift in the digital advertising landscape.

How AI Ads Change the Search Experience

In the traditional search model, ads are clearly delineated at the top and bottom of the organic results. In AI Mode, ads are being integrated directly into the generative responses. This means that when a user asks a complex question, the AI-generated answer may include sponsored products or services as part of the recommendation or explanation.

For advertisers, this offers an opportunity to reach users at a high-intent stage of the funnel. For publishers, however, it presents a challenge. If the AI provides a comprehensive answer that includes an ad for a product, the user may never feel the need to click through to an organic website. This “zero-click” environment is forcing a rethink of how we value search presence.

Strategies for the AI-Driven Ad Era

Advertisers need to adapt their bidding and creative strategies to fit the conversational nature of AI Mode. Keyword matching is becoming broader, and the focus is shifting toward “contextual relevance.” If your product is mentioned in an AI Overview, the attribution models become more complex.

Furthermore, publishers must decide whether to allow their content to be used to train these AI models or to be cited in these overviews. While being cited can provide a “brand lift,” it may not always translate to direct traffic if the ad inventory within the AI mode captures the user’s final action.

Crawl Policy and the Rise of Bot-Facing Content

Perhaps the most technical and contentious issue in the current SEO pulse is the evolution of crawl policies. As Large Language Models (LLMs) like ChatGPT, Claude, and Google’s Gemini require vast amounts of data for training, the web has become a battlefield for data scraping.

The Scrutiny of Bot-Facing Content

There is a growing trend of websites creating content specifically designed to be indexed by bots rather than read by humans. This “bot-facing” content is often used to manipulate AI training sets or to ensure that an AI model cites a specific brand when asked a related question. Google and other major search engines are increasing their scrutiny of these practices. Content that is deemed to be “bot-only” or created solely for the purpose of manipulating LLMs can lead to penalties. The goal for search engines is to maintain a “human-first” index. If the content does not provide value to a human reader, it shouldn’t hold value in the search or discovery ecosystem.

Navigating Robots.txt and AI Scrapers

Many publishers have begun to implement stricter crawl policies to protect their intellectual property. By updating their robots.txt files, sites can opt out of specific AI crawlers (like GPTBot) while still allowing traditional search bots (like Googlebot) to index their pages for search results. However, this is a delicate balance. If a site is too restrictive, it may miss out on
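As a sketch of the selective approach described in the robots.txt discussion above (which crawlers to block is a policy decision for each publisher, not a recommendation here):

```
# robots.txt - block one AI training crawler while leaving normal search crawling open.
User-agent: GPTBot
Disallow: /

# Googlebot and any crawler not named above keep default access.
User-agent: *
Allow: /
```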


Why video is the canonical source of truth for AI and your brand’s best defense

The digital landscape is undergoing a seismic shift. For years, the internet has been treated as a massive, open-access library for web scraping—a “Wild West” where AI models could graze on any available data to learn and grow. However, that era is rapidly coming to an end. As generative AI becomes more pervasive, the quality of information found on the open web is degrading, leading to what many experts call “AI slop fatigue.”

In response, major players are moving toward a more structured, verified approach to data. A prime example is OpenAI’s landmark deal with Disney. This partnership allows OpenAI to train its models on high-fidelity, human-verified cinematic content. For brands, this signals a major turning point: the transition from text-based dominance to video as the ultimate validator of brand identity. If you want to protect your brand from the distortions of generative AI, video is no longer just a marketing choice—it is your canonical source of truth.

The Rising Threat of AI Brand Drift

To understand why video has become so critical, we must first look at how Large Language Models (LLMs) treat information. When an AI model is asked about a brand and lacks specific, up-to-date data, it doesn’t simply say, “I don’t know.” Instead, it performs a process called interpolation. It fills in the gaps by guessing your brand’s story based on patterns found in similar companies, general industry trends, or outdated web archives.

This phenomenon is known as “brand drift.” It occurs when an AI model narrates an inaccurate version of your business, which then gets repeated to thousands of potential customers. The danger here is that the AI speaks with high confidence, leading users to believe the misinformation is factual.

Real-World Examples of Brand Distortion

Brand drift is not a theoretical problem; it is already affecting businesses of all sizes. For instance, the SaaS company Streamer.bot has seen users arrive at their support channels with “confidently wrong” setup instructions generated by ChatGPT. The AI had constructed elaborate, phantom features, pricing tiers, and integration requirements that the product never actually offered. This forced the human team to spend valuable time correcting misinformation that the company had never published.

Local businesses are equally vulnerable. In recent reports, restaurant owners have expressed frustration with Google’s AI Overviews, which have repeatedly shared false information regarding menu items, business hours, and daily specials. When the AI lacks a verified “source of truth,” it defaults to the most probable guess, often at the expense of the brand’s reputation.

Why Video Functions as a Canonical Source of Truth

In the world of SEO and AI training, a “canonical source” is the definitive version of a piece of content. Historically, this has been a website’s homepage or a primary blog post. However, in the age of AI, text is becoming increasingly easy to manipulate and misinterpret. Text-based files have “low entropy”—a statement like “50% off” looks identical whether it was written in 2015 or 2025. Text often lacks the physical “timestamp of reality,” making it easy for AI to lose the context of the real world.

Video solves this by providing a high-density data environment. When you produce an authoritative video—such as a product demo that explicitly clarifies features or pricing—you are providing a massive amount of semantic information.
A five-minute video recorded at 60 frames per second contains 18,000 individual frames of visual evidence, paired with a nuanced audio track and a precise text transcript. This high-density data allows AI models to capture non-verbal cues and visual proof that are often flattened or lost in written content. The video acts as a validation layer, overriding conflicting opinions from Reddit, old forum posts, or competitor-generated noise. For an AI model trying to verify a fact, the visual evidence of a human expert speaking or a product in motion is weighted far more heavily than a string of text on a third-party site.

Authenticity as a Technical Signal

As deepfakes and AI-generated “slop” proliferate, authenticity is shifting from a vague moral concept to a hard technical requirement. Search engines and AI agents need a way to verify provenance: Is this video real? Does it actually come from the brand it claims to represent? For AI models, real-world human footage is the ultimate high-trust data source. Real-world light, physics, and human micro-expressions contain chaotic, non-repetitive entropy that AI-generated video still struggles to replicate. This “chaotic data” is exactly what AI needs to verify that content is grounded in reality.

The Role of C2PA and Digital Transparency

To institutionalize this verification, organizations like the Coalition for Content Provenance and Authenticity (C2PA) are developing standards to verify the origin of digital media. This coalition includes industry giants like Google, Adobe, Microsoft, and OpenAI. They are creating technical specifications that allow data to be cryptographically verifiable. Parallel to this is the Content Authenticity Initiative (CAI), led by Adobe, which drives the adoption of open-source tools for digital transparency.

These organizations are moving beyond simple watermarking. They are enabling a system where brands can “sign” their videos the moment they begin recording. This creates a digital paper trail that AI models can prioritize over unverified, anonymous content.

How Media Verification Works: From Lens to Screen

The future of brand protection lies in a verified media pipeline. You may have already noticed a tiny “CR” mark in the corner of images or videos on platforms like LinkedIn. This label stands for “Content Credentials.” When you click or hover over this icon, you gain access to a sidebar that details the creator, the tools used to edit the media, and a clear disclosure of whether AI was used.

This is not just about “shaming” AI content; it is about establishing authority. Google has already begun integrating C2PA signals into its search and advertising platforms to enforce policies against misrepresentation. By checking the metadata of an image or video, Google can determine if a file has been deceptively altered or if it holds a verified signature from a trusted brand.

The Hardware


Generative Engine Optimization: The Patterns Behind AI Visibility

The traditional search landscape, once defined by a list of ten blue links, is undergoing its most significant transformation since the invention of the crawler. As artificial intelligence integrates deeper into the way we find information, a new discipline has emerged: Generative Engine Optimization (GEO). This is the strategic practice of positioning a brand and its content so that AI platforms—such as Google AI Overviews, ChatGPT, Gemini, and Perplexity—cite, recommend, and mention you when answering user queries.

For many digital marketers, the term might sound like another buzzword in an industry prone to jargon. However, the data suggests otherwise. Consider the success of Tally, a bootstrapped form builder. By optimizing for AI-driven discovery, the company found that ChatGPT became its number one source of referrals.

This isn’t an isolated incident. Across the digital ecosystem, the shift toward AI discovery is measurable and massive. Currently, ChatGPT reaches over 800 million weekly users. Google’s Gemini app has surpassed 750 million monthly users. Perhaps most importantly for search professionals, Google’s AI Overviews are now appearing in roughly 16% of all searches, with even higher frequencies for product comparisons and high-intent queries. The transition from “Search” to “Answer Engines” is well underway. The question for brands is no longer whether AI is changing discovery, but whether their content is structured to be found in this new reality.

Understanding the GEO Landscape

While the potential of GEO is immense, it is currently characterized by significant volatility. Research conducted via the Semrush AI Visibility Index, which tracked 2,500 prompts across Google AI Mode and ChatGPT, revealed that between 40% and 60% of cited sources change month to month. This suggests that AI systems are constantly re-evaluating which sources are the most reliable, relevant, and extractable.

Despite this fluctuation, clear patterns have emerged. The brands that appear consistently in AI responses share specific structural characteristics. They prioritize entity clarity, content extractability, and a robust multi-platform presence. These elements make it easier for large language models (LLMs) to find, trust, and reference their information.

What GEO Looks Like in Practice

To understand GEO, we must look at how AI synthesizes information. If a user asks, “What is the best whey protein powder for a woman in her 50s?” an AI engine does not just provide a list of websites. Instead, it evaluates multiple brands, cross-references ingredients, checks user reviews across the web, and generates a personalized recommendation. Success in GEO means your brand is the one the AI chooses to highlight in that summary.

Achieving this requires a coordinated effort across four key areas:

- Content Strategy: Publishing data that AI systems can easily discover, interpret, and extract.
- Brand Presence: Establishing authority on third-party platforms where AI tools aggregate data, such as Reddit, YouTube, and niche industry forums.
- Technical Optimization: Ensuring that AI crawlers can access, render, and process your content without friction.
- Reputation Building: Cultivating positive mentions and associations that signal credibility to the AI’s trust layers.

How GEO Differs from Traditional SEO

GEO is not a replacement for traditional SEO; it is an evolution of it. Many of the fundamentals remain the same, but the objective has shifted.
In traditional SEO, the goal is to rank as high as possible to earn a click. In GEO, the goal is to be the primary reference point in a generated answer.

The Comparison: SEO vs. GEO

To adapt your mental model, it helps to see how the two disciplines diverge across key metrics:

- Primary Goal: Traditional SEO seeks to rank in top positions. GEO seeks to be cited or mentioned in AI-generated answers.
- Success Metrics: SEO tracks rankings, clicks, and organic traffic. GEO tracks citations, brand mentions, and share of voice within AI responses.
- User Behavior: In SEO, users click through to your site. In GEO, the AI provides the value directly, often using your brand as the authoritative source.
- Key Platforms: SEO focuses on Google and Bing. GEO expands to ChatGPT, Perplexity, Claude, and Google AI Overviews.
- Optimization Focus: SEO emphasizes title tags, keyword density, and site speed. GEO emphasizes self-contained paragraphs, factual density, and structured data.

While the surfaces are changing, the core principles of the “helpful content” movement remain vital. AI systems favor authoritative, well-structured content—the same qualities that Google has championed for years.

The 5 Principles of AI Visibility: A Strategic Framework

An effective GEO strategy is built on five core principles. These principles address how AI systems discover, evaluate, and ultimately decide to reference your brand.

1. SEO Fundamentals as the Foundation

Traditional SEO fundamentals serve as the infrastructure for GEO. AI-generated answers are assembled from content that is accessible, readable, and attributable. If an AI crawler cannot consistently crawl or render your page, your content cannot be retrieved. Technical accessibility is a major factor here. JavaScript-heavy sites that struggle with client-side rendering often pose challenges for AI crawlers. Furthermore, the principles of E-E-A-T (Experience, Expertise, Authoritativeness, and Trust) are more important than ever. AI systems are designed to minimize “hallucinations” by grounding their answers in content that displays clear expertise and identifiable authorship.

2. Entity Clarity and AI Understanding

AI systems don’t just read text; they attempt to understand the relationships between “entities”—people, places, brands, and things. Entity clarity involves distinguishing your brand from similar names and ensuring the AI knows exactly what category you occupy. For example, a company named “Monday.com” must provide enough context so that an AI distinguishes the project management software from the day of the week.

This is achieved through consistent descriptions across your website, social profiles, and third-party directories. When your brand description on LinkedIn matches your description on Crunchbase and your “About” page, the AI gains confidence in your entity’s identity. Schema markup (JSON-LD) acts as the machine-readable translation of this clarity. By mirroring your page structure in a standardized format, you provide a clear map for the AI to follow, reducing the likelihood of miscategorization. (A short example of this kind of markup appears at the end of this section.)

3. Content Extractability and Reusability

This is perhaps the most tactical shift in GEO. AI engines use Retrieval-Augmented Generation
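Tying back to the entity-clarity principle above, a minimal Organization sketch with sameAs links might look like the following. The company name, description, and URLs are placeholders; the intent is simply that the same identity is asserted consistently everywhere the brand appears.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "ExampleForms",
  "description": "Online form builder for small teams.",
  "url": "https://www.exampleforms.com",
  "logo": "https://www.exampleforms.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/exampleforms",
    "https://www.crunchbase.com/organization/exampleforms"
  ]
}
</script>
```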


Google previews WebMCP, a new protocol for AI agent interactions

The Evolution of the Web: From Content to Actions

The digital landscape is undergoing its most significant transformation since the advent of the mobile web. For decades, the internet has functioned as a vast repository of information designed for human consumption. We search, we click, we read, and we manually interact with forms. However, the rise of large language models (LLMs) and autonomous AI agents is shifting this paradigm. We are entering the era of the “Agentic Web,” where software doesn’t just find information but executes complex tasks on our behalf.

Google’s recent announcement of WebMCP (Model Context Protocol for the Web) marks a pivotal moment in this transition. This new protocol is designed to provide a standardized framework for how AI agents interact with websites. By moving away from fragile web scraping and toward structured, reliable “tool contracts,” Google is laying the groundwork for a future where AI can navigate the web with the same precision as a human user, and perhaps with greater speed.

What is WebMCP? Understanding the New Standard

WebMCP is a technical protocol that allows a website to explicitly communicate its capabilities to an AI agent. In the current environment, if an AI agent wants to interact with a website—such as booking a flight or filing a support ticket—it must “read” the Document Object Model (DOM). It looks at the HTML, tries to identify which button is the “submit” button, and attempts to guess what various input fields require. This process is slow, computationally expensive, and highly prone to error whenever a website changes its design.

WebMCP changes this by introducing a “Tool Contract.” Instead of the agent guessing, the website provides a structured manifest of available actions. These actions are exposed as “tools” that the AI can call directly. According to André Cipriani Bandarra, a lead at Google, the goal is to ensure agents can perform actions with “increased speed, reliability, and precision.”

The Technical Foundation: navigator.modelContext

At the heart of WebMCP lies a new browser API: navigator.modelContext. This API serves as the bridge between the website’s underlying code and the AI model running in the browser or on a remote server. Through this interface, a developer can define a structured list of functions. For example, a travel site might expose a tool called searchFlights(origin, destination, date).

When an AI agent accesses the page, it queries navigator.modelContext to see what tools are available. Because the tools are defined with clear parameters and data types, the AI doesn’t have to navigate the UI in the traditional sense. It simply provides the necessary data to the tool, and the browser executes the action. This eliminates the “hallucination” risk where an agent might click the wrong button or enter data into the wrong field.
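The registration surface is still being worked out, so the sketch below is a guess at the shape rather than the final API. Only navigator.modelContext and the searchFlights(origin, destination, date) example come from the announcement itself; the provideContext-style call, the JSON-Schema-described inputs, and the /api/flights endpoint are assumptions for illustration.

```javascript
// Hypothetical sketch of a site exposing a "tool contract" to visiting agents.
// Method name and option shape are assumptions; only navigator.modelContext
// and the searchFlights example are taken from the announcement.
if (navigator.modelContext) {
  navigator.modelContext.provideContext({
    tools: [{
      name: "searchFlights",
      description: "Search available flights for a route and date.",
      inputSchema: {
        type: "object",
        properties: {
          origin:      { type: "string", description: "IATA airport code, e.g. SFO" },
          destination: { type: "string", description: "IATA airport code, e.g. LHR" },
          date:        { type: "string", description: "Departure date, YYYY-MM-DD" }
        },
        required: ["origin", "destination", "date"]
      },
      // The site maps the structured call onto its existing search logic, so the
      // agent never has to guess at buttons or form fields. Endpoint is made up.
      async execute({ origin, destination, date }) {
        const res = await fetch(`/api/flights?from=${origin}&to=${destination}&on=${date}`);
        return res.json();
      }
    }]
  });
}
```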
The Declarative API

Google has structured WebMCP to be accessible for different levels of web complexity. The Declarative API is designed for standard, straightforward interactions. It primarily focuses on HTML forms. By adding specific attributes to existing form elements, developers can tell an AI agent exactly what a form is for and what each field represents without writing extensive new code. This is the “low-code” path to making a website agent-ready.

The Imperative API

For more complex applications—such as Single Page Applications (SPAs) built on React, Vue, or Angular—the Imperative API provides the necessary depth. This API allows for dynamic interactions that require JavaScript execution. If a website has a complex checkout flow that involves real-time price updates or conditional logic, the Imperative API allows the developer to wrap these complex processes into a clean, callable tool for the agent. It ensures that even the most sophisticated web apps can be navigated by AI without friction.

Why WebMCP is a Game Changer for SEO

The SEO community has already begun to recognize the gravity of this announcement. Industry experts like Dan Petrovic have described WebMCP as the most significant shift in technical SEO since the introduction of structured data (Schema.org). For years, SEO has been about helping search engines understand what a page *is*. With WebMCP, SEO becomes about helping agents understand what a page can *do*.

In the near future, ranking in a search engine may not just depend on your content’s relevance, but on your “agent-readiness.” If an AI agent (like a future version of Google Gemini or a specialized shopping agent) is tasked with finding the best price for a specific product and completing the purchase, it will naturally prefer websites that provide a clear, reliable WebMCP tool contract. Sites that require messy scraping will be sidelined in favor of those that offer a seamless, structured interface for the agent.

Glenn Gabe, a prominent SEO consultant, has noted that this is a “big deal” for the industry. It moves the goalposts of optimization. Technical SEO will likely expand to include “Agent Optimization,” where developers and SEOs collaborate to ensure that their site’s tool contracts are comprehensive, error-free, and aligned with user intent.

Practical Use Cases for the Agentic Web

Google has highlighted several core industries where WebMCP will have an immediate and profound impact. These use cases illustrate how the protocol moves beyond simple automation to true agentic capability.

Travel and Logistics

Travel is one of the most complex tasks for a human to perform online. It involves filtering dozens of variables, comparing prices across multiple tabs, and navigating multi-step booking engines. With WebMCP, a user could simply say to their AI assistant, “Find me a direct flight to London under $800 for next Tuesday and book it using my saved card.” The agent would enter the airline’s site, query the WebMCP tools for searchFlights, receive structured data on available options, filter them instantly, and use a completeBooking tool to finish the transaction. Because the interaction happens via a structured protocol, the risk of the agent booking the wrong date or failing to select the “direct” option is virtually eliminated.

Customer Support and Ticketing

Filing a support ticket is often a tedious process of looking up order numbers, finding technical specifications, and


Google shares what’s next in digital advertising and commerce in 2026

The Evolution of Digital Commerce: A New Era of AI Integration

The digital landscape is undergoing its most significant transformation since the inception of the mobile internet. As we move into 2026, the traditional boundaries between searching, browsing, and buying are dissolving. In her third annual letter detailing the future of the industry, Vidhya Srinivasan, VP/GM of Ads & Commerce at Google, has outlined a vision where artificial intelligence is no longer a peripheral tool but the central nervous system of global commerce. This evolution is characterized by a shift toward experiences that are faster, more deeply personalized, and virtually frictionless for both the consumer and the advertiser.

For years, the industry talked about “omnichannel” marketing as a goal. In 2026, that goal has been surpassed by “agentic commerce”—a system where AI agents assist in every step of the journey, from initial discovery on a creator’s video to the final click of a “buy” button within a search interface. This shift represents a fundamental change in how value is created and captured in the digital economy.

From Influence to Impact: The New Creator Economy

YouTube has long been recognized as a powerhouse for product discovery, but 2026 marks the year it becomes a primary commerce hub. The “creator to commerce” pipeline has been refined by sophisticated AI matching algorithms. Historically, brands struggled to find a creator whose audience aligned precisely with their product’s niche. Today, Google is leveraging AI to analyze vast amounts of engagement data, sentiment, and content themes to pair brands with creators who act as trusted tastemakers.

This goes beyond simple product placement. AI now helps translate a creator’s influence into measurable business impact. By analyzing real-time conversion data and viewer behavior, the platform can optimize which products are shown to which viewers during a video, ensuring that the commerce experience feels like a helpful recommendation rather than a disruptive advertisement. This high-context environment allows creators to maintain their authenticity while providing brands with a direct path to high-intent shoppers.

The Role of Authenticity in AI-Driven Marketing

In an era where generative AI can create almost anything, the value of human connection has skyrocketed. YouTube creators remain the bridge between technology and the consumer. Google’s 2026 strategy emphasizes that while AI handles the backend matching and optimization, the “tastemaker” remains the human element. This synergy keeps advertising grounded in community and trust, even as the delivery mechanisms become increasingly automated.

The Evolution of Search Ads and the Rise of AI Mode

Search is no longer a static list of blue links. With the widespread adoption of “AI Mode,” Google has reimagined search ads as an integrated part of a conversational discovery journey. Users in 2026 are increasingly turning to multi-modal queries—combining voice, text, and images—to find exactly what they need. In response, Google has introduced new ad formats designed to meet users in these fluid moments.

Sponsored Retail Listings and Direct Offers

One of the most significant updates is the introduction of sponsored retail listings and “Direct Offers” within the AI Mode interface. When a user asks a complex question about a product category—for example, “What are the best sustainable running shoes for wide feet under $150?”—the AI doesn’t just provide a text summary. It integrates sponsored listings that match those exact criteria directly into the conversation.

Direct Offers allow brands to present real-time, personalized incentives to users at the peak of their interest. These aren’t generic coupons; they are dynamic offers generated from the user’s journey, designed to convert interest into a sale instantly. This reduces the steps in the funnel, moving a customer from “curiosity” to “checkout” in a single interaction.

Agentic Commerce and the Universal Commerce Protocol (UCP)

Perhaps the most revolutionary development for 2026 is the arrival of agentic commerce, powered by the Universal Commerce Protocol (UCP). For decades, the primary friction in online shopping has been the “handoff”—the moment a user leaves a search engine or social media site to navigate a merchant’s specific checkout process. UCP aims to standardize the commerce experience across the web.

By using UCP, Google enables consumers to browse, pay, and complete purchases seamlessly without ever leaving the AI Mode environment. The protocol provides a secure, standardized framework for sharing payment and shipping information between the platform and the merchant, provided the user has given consent. This means that an AI agent can effectively “act” on behalf of the user to finalize a transaction.
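The letter does not publish a technical specification for UCP, but the description above implies a consent-gated handoff of order, payment, and shipping details from the AI surface to the merchant. The sketch below is a purely illustrative guess at what such a handoff object might contain; none of the field names come from an actual UCP schema.

```javascript
// Purely illustrative: a consent-gated checkout handoff of the kind the
// description above implies. Field names are assumptions, not the UCP spec.
const checkoutHandoff = {
  protocol: "ucp",                          // assumed identifier
  merchant: "example-retailer.com",
  consent: {
    grantedByUser: true,                    // the user explicitly approved this purchase
    scope: ["payment", "shipping"],
    timestamp: "2026-01-15T10:32:00Z",
  },
  order: {
    items: [{ sku: "SHOE-123", quantity: 1, unitPrice: 139.99, currency: "USD" }],
  },
  payment: { token: "tok_abc123" },         // tokenized rather than raw card details
  shipping: { name: "A. Shopper", address: "123 Example St, Springfield, US" },
};
```

The point of a shared schema along these lines is that the merchant no longer has to walk the buyer through its own forms; the agent hands over a complete, user-approved order in one step.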
Early Adopters and Global Scaling

The rollout of UCP and agentic commerce features has already begun with major partners. Early participants like Etsy and Wayfair have seen significant reductions in cart abandonment rates because the friction of account creation and manual data entry has been eliminated. Google has confirmed that the next phase of the rollout will include major commerce platforms and retailers such as Shopify, Target, and Walmart. As these platforms integrate with UCP, the “buy it now” capability will become a ubiquitous feature of the digital experience, regardless of where the initial discovery happened.

Transforming Creative Production with Gemini 3 and Veo 3

The demand for high-quality, personalized creative content has never been higher, but the cost and time required for traditional production have often been barriers for smaller brands. Google is addressing this with its latest suite of generative AI tools, powered by the Gemini 3 model. These tools allow advertisers to create studio-quality assets in a matter of minutes, not weeks.

Nano Banana and Veo 3: The Future of Visual Assets

Two specific tools are leading this charge: Nano Banana and Veo 3. Nano Banana focuses on rapid, high-fidelity image and graphic generation, allowing marketers to produce thousands of variations of an ad to see which resonates best with different audience segments. Veo 3 takes this a step further by offering advanced image-to-video capabilities. Advertisers can take a simple product photo and transform it into a cinematic, high-definition video ad suitable for YouTube or social media. This democratization of creative production means that “performance” is no longer just a function of big production budgets; smaller brands can now compete on creative quality as well.
