How AI-generated content performs in Google Search: A 16-month experiment

The rise of Generative AI has fundamentally changed the landscape of content marketing and Search Engine Optimization (SEO). Today, a single person can generate hundreds of high-quality-looking articles in a matter of hours, a task that once took a team of writers months to accomplish. However, the ease of production has led to a critical question for digital publishers: does this content actually provide long-term value in the eyes of Google?

To answer this, a comprehensive 16-month experiment was conducted in collaboration with the research team at SE Ranking. The goal was to move beyond anecdotal evidence and track the performance of raw, unedited AI content on brand-new domains with zero existing authority. The findings suggest that while AI can provide a quick burst of visibility, the road to long-term search success is far more complex than simply hitting “generate.”

The Methodology: Setting the Stage for the Experiment

The core objective of this study was to observe the natural lifecycle of AI-generated content without the interference of human optimization. Many SEO experts argue that AI content only works when heavily edited or paired with a strong backlink profile. This experiment stripped away those variables to see how the content performed on its own merit.

The team purchased 20 brand-new domains, ensuring there was no previous search history, brand recognition, or existing backlink profile that could skew the results. Each domain was dedicated to a specific niche to provide a broad look at how Google handles different topics. The niches included:

  • Arts & Entertainment
  • Business & Services
  • Community & Society
  • Computers & Technology
  • Ecommerce & Shopping
  • Finance & Accounting
  • Food & Drink
  • Games & Accessories
  • Health & Medicine
  • Hobbies & Interests
  • Industry & Engineering
  • Home & Garden
  • Jobs & Career
  • Law & Government
  • Lifestyle & Well-being
  • Pets & Animals
  • Science & Education
  • Sports & Fitness
  • Travel & Tourism
  • Vehicles & Boats

For each of these 20 niches, the researchers identified 100 informational “how-to” keywords. These were specifically chosen as long-tail terms with lower competition, which typically offer the easiest path to ranking for new websites. In total, 2,000 AI-generated articles were published across the network of sites. No human editing, rewriting, or enhancement was performed. Once published, the sites were added to Google Search Console, sitemaps were submitted, and the pages were left untouched to observe their organic performance over 16 months.

Early Success: The Indexing and Visibility Phase

The initial results were surprisingly positive, leading some to believe that “AI spam” might actually be a viable strategy. Within the first 36 days, Google showed a high willingness to crawl and index the new content. Approximately 71% of the 2,000 pages (1,419 articles) were indexed within just over a month. For brand-new domains with zero authority, this is a remarkably high success rate.
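The indexing rate quoted above can be sanity-checked with a couple of lines of Python, using the page counts reported in the study:

```python
total_pages = 2_000    # articles published across the 20 domains
indexed_pages = 1_419  # pages indexed within ~36 days

indexing_rate = indexed_pages / total_pages
print(f"Indexed within the first ~36 days: {indexing_rate:.0%}")  # → 71%
```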

During this first month, the collective network of sites generated 122,102 impressions and 244 clicks. More impressively, 80% of the sites were already ranking for at least 100 keywords. Some niches saw explosive early interest. The “Hobbies & Interests” domain led the pack with over 17,000 impressions, followed closely by “Business & Services” and “Travel & Tourism.”

This early performance indicates that Google’s initial assessment of content is often based on relevance and basic SEO structure. Because the AI-generated content followed a logical “how-to” format and targeted low-competition keywords, Google initially gave it a chance to compete in the Search Engine Results Pages (SERPs).

Months 2–3: The Growth Peak

As the experiment moved into its second and third months, the momentum continued to build. Cumulative impressions across the 20 sites rose from 122,102 to 526,624, and clicks increased from 244 to 782. By the ten-week mark, 12 of the 20 sites were ranking for more than 1,000 keywords each.

This phase often lures digital publishers into a false sense of security. It appears that the strategy is working: the content is indexed, rankings are climbing, and traffic is trickling in. During this window, Google is essentially “testing” the content. It places the pages in front of users to see how they interact with the information. However, this growth proved to be the peak rather than the beginning of a steady climb.

The Great Ranking Collapse: Months 3–6

The turning point for the experiment arrived around early February 2025, approximately three months after the initial publication. The visibility that had been building steadily began to evaporate. By the six-month mark, the results were staggering: only 3% of the pages remained in the top 100 search results, down from 28% in the first month.

While cumulative impressions reached over 700,000 by month six, a closer look at the data revealed a troubling trend. Roughly 75% of all impressions and clicks were generated in the first 2.5 months. The subsequent 3.5 months saw a sharp decline in growth, with the sites adding very little to their totals. Google had effectively decided that the vast majority of this content did not deserve a place on the first few pages of search results.
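The 75% figure follows directly from the cumulative totals reported earlier: roughly 526,624 impressions by the 2.5-month mark against an approximate 700,000 by month six. A quick check (treating 700,000 as the month-six total, since the study only gives it as “over 700,000”):

```python
impressions_2_5_months = 526_624  # cumulative impressions at ~2.5 months
impressions_6_months = 700_000    # approximate cumulative impressions at month six

share_early = impressions_2_5_months / impressions_6_months
print(f"Share of impressions earned in the first 2.5 months: {share_early:.0%}")  # → 75%
```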

The pages remained indexed, meaning Google still knew they existed, but they were essentially “buried.” Without the authority of backlinks or the unique value of human expertise, the AI-generated content could not maintain its position against more established or higher-quality competitors.

Long-Term Stagnation and the Impact of Spam Updates

The experiment was allowed to run for a total of 16 months to see if the sites would eventually recover or if Google’s algorithms would re-evaluate the content. For over a year, visibility remained extremely low across almost all niches. There was no “bounce back” for the majority of the AI articles.

However, an interesting fluctuation occurred during the rollout of the Google August 2025 spam update. During this period, 50% of the sites saw a brief two-week spike in impressions. Following the completion of the update, the percentage of pages ranking in the top 100 rose to 20%—a significant improvement from the 3% seen at the six-month mark, though still lower than the initial 28% seen during the first month.

Despite this slight recovery in ranking positions, the actual traffic (clicks) remained negligible. This suggests that while Google’s updates may occasionally reshuffle how AI content is viewed, it rarely leads to a meaningful recovery for sites that lack underlying authority and quality signals.

YMYL Niches: The Strictest Standards

One of the most telling findings of the 16-month experiment was how differently Google treated various niches. Specifically, domains focusing on “Your Money or Your Life” (YMYL) topics—such as Finance, Health, and Law—performed the worst in terms of long-term indexation and visibility.

  • The Finance domain saw only 9 out of 100 pages remain indexed.
  • The Health domain saw only 14 out of 100 pages remain indexed.

Google applies much stricter standards to content that can impact a person’s financial stability, health, or safety. Because the AI-generated articles lacked verifiable expertise, author credentials, or unique insights, they were quickly flagged as low-quality or untrustworthy by the search engine’s ranking systems.

Why Pure AI Content Fails to Sustain Rankings

Analyzing the 16-month experiment reveals several clear reasons why the “publish at scale” approach with raw AI content fails to produce lasting SEO results. Google’s algorithms are designed to reward content that demonstrates E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness. Purely AI-generated content struggles in several of these areas.

1. Lack of Authority and Backlinks

New domains start with a “Domain Authority” of zero. In competitive search environments, backlinks act as votes of confidence. Since this experiment relied solely on content without any promotional or link-building efforts, the sites had no external validation. Without authority, Google is less likely to trust a site’s information over the long term.

2. Absence of Content Differentiation

AI models are trained on existing data. Consequently, they tend to produce content that is an “average” of what is already on the web. This leads to a lack of differentiation. If an article doesn’t offer a new perspective, unique data, or a better user experience than what already exists, Google has no reason to rank it above established pages.

3. Poor Site Structure and Internal Linking

The experiment involved publishing 100 articles per domain and leaving them untouched. In a standard SEO strategy, internal linking helps Google understand the relationship between pages and establishes “topical authority.” Without a clear hierarchy or internal links, the sites felt like disconnected collections of articles rather than a cohesive resource.

4. E-E-A-T Signals

Google looks for signals that a real person with real experience is behind the content. Raw AI content often lacks a unique “voice” or anecdotal evidence that proves expertise. In niches like Law or Medicine, the absence of a qualified author profile is an immediate red flag for search algorithms.

Bonus Insight: The “Freshness” Halo Effect

In the final stages of the study (March 2026), a follow-up experiment was conducted to see how new AI content would affect the existing, stagnant sites. The researchers added fresh AI-generated articles to eight of the original domains.

The results were unexpected. While the new articles themselves weren’t always indexed immediately, the act of publishing new content caused a massive spike in impressions for the *older* pages on the site. For example:

  • The Business-focused site saw a 17x increase in impressions (from 458 to 7,750).
  • The Law-focused site saw a 19x increase (from 19 to 356).
  • The Science-focused site also saw a 19x increase (from 34 to 633).
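The multipliers in the list above can be reproduced from the raw before/after impression counts (the niche labels here are shorthand for the domains named in the list):

```python
# (impressions before the new content, impressions after) per domain
spikes = {
    "Business": (458, 7_750),
    "Law": (19, 356),
    "Science": (34, 633),
}

for niche, (before, after) in spikes.items():
    print(f"{niche}: {after / before:.0f}x increase")
# Business: 17x, Law: 19x, Science: 19x
```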

This suggests that Google rewards “site freshness.” When a site begins publishing again, the algorithm may re-crawl the entire domain and give the older content a second look. However, this appears to be a temporary boost rather than a permanent fix for low-quality content.

Conclusion: The Future of AI in SEO

The 16-month experiment serves as a cautionary tale for those looking for a shortcut to SEO success. While AI can certainly speed up the drafting process and help with keyword research, it cannot replace a comprehensive SEO strategy. The initial “honeymoon phase” of high impressions and indexing is often a mirage that disappears once Google’s quality filters and testing phases are complete.

For modern publishers, the takeaway is clear: AI is a powerful tool for productivity, but human-in-the-loop editing is essential. To rank and stay ranked, content must provide unique value, demonstrate genuine expertise, and be supported by a broader strategy of link-building and site authority. In the world of search, quality and trust remain the ultimate currency.
