The AI writing tics that hurt engagement: A study

The rise of generative AI has transformed the landscape of digital publishing, but it has also birthed a new era of “vibe-based” editorial criticism. If you spend any time on social media platforms like LinkedIn or X, you have likely seen content marketers and SEOs confidently pointing out the “dead giveaways” of AI-generated text. From the over-reliance on em dashes to the predictable “In today’s fast-paced world” introductions, the consensus seems to be that readers hate AI writing because it feels robotic, repetitive, and uninspired.

However, much of this discourse relies on subjective taste rather than hard data. While a seasoned editor might cringe at a specific linguistic pattern, the real question for digital publishers is whether these patterns actually impact performance. Does a reader truly bounce because they saw an em dash, or is the industry over-correcting based on personal pet peeves? To move past the guesswork, a comprehensive study was conducted to analyze which AI writing “tics” actually hurt user engagement and which ones are being unfairly maligned.

The Methodology: Quantifying the AI Linguistic Footprint

To understand the relationship between stylistic patterns and reader behavior, the study gathered an extensive dataset. Analyzing content purely on a “feel” basis isn’t enough; the research required a standardized approach to identify correlations between specific phrases and Google Analytics 4 (GA4) metrics.

The research was built upon the following dataset parameters:

  • 10 Diverse Domains: The study covered a wide spectrum of industries, including technology, e-commerce, healthcare, education, and analytics. This ensured that the findings weren’t limited to a single niche or audience type.
  • Over 1,000 URLs: The URLs represented a mix of content workflows, ranging from fully human-written articles to hybrid human-AI collaborations and completely AI-generated posts.
  • Minimum Word Count: Any page under 500 words was excluded. Very short posts do not provide enough linguistic real estate for stylistic patterns to emerge reliably, and their engagement metrics are often skewed by quick-answer search intent.

To ensure a fair comparison, the researchers standardized the data by measuring “tics per 1,000 words.” Without this normalization, a 4,000-word deep dive would look significantly “worse” than a 600-word blog post simply because it contains more sentences. The primary success metric was engagement rate. In GA4, an “engaged session” is a visit that lasts longer than 10 seconds, triggers a conversion event, or includes at least two page views. While 10 seconds sounds brief, it is the critical window in which a reader decides whether the content is worth their time or just another piece of generic digital filler.
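To make the normalization concrete, here is a minimal sketch of how a “tics per 1,000 words” score might be computed. The pattern list is an illustrative assumption for demonstration, not the study’s actual tic taxonomy:

```python
import re

# Illustrative tic patterns -- assumptions for demonstration,
# not the study's actual taxonomy.
TIC_PATTERNS = {
    "em_dash": re.compile(r"\u2014"),  # the em dash character
    "fast_paced": re.compile(r"\bfast-paced\b", re.IGNORECASE),
    "delve": re.compile(r"\bdelve\b", re.IGNORECASE),
}

def tics_per_1000_words(text: str) -> float:
    """Total tic matches, normalized by document length in words."""
    word_count = len(text.split())
    if word_count == 0:
        return 0.0
    hits = sum(len(pattern.findall(text)) for pattern in TIC_PATTERNS.values())
    return hits / word_count * 1000
```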

The Shakespeare Curveball: Why Some “AI Tells” Are Actually Human

Before diving into the engagement data, the study uncovered a fascinating paradox. Many of the linguistic patterns we associate with Large Language Models (LLMs) are deeply rooted in high-quality human prose. To test the validity of AI “tic” counters, the researchers ran the analysis against two control samples that were guaranteed to be 100% human-written.

The first control was a novel published in 2021, written before the widespread availability of tools like ChatGPT. This text scored 6.9 tics per 1,000 words—a score that would trigger many modern AI detectors. Even more surprising was the second control: William Shakespeare’s *Hamlet*. The play scored approximately 11.4 tics per 1,000 words, making the Bard of Avon more “AI-coded” than many modern AI-generated blog posts.

This anomaly was largely driven by the em dash. Shakespeare and literary novelists use complex sentence structures that rely on punctuation to manage parenthetical thoughts. Because AI is trained on vast troves of human literature and professional writing, it mimics these structures. This suggests that some of the features we call “AI tics” are actually just hallmarks of formal or complex English. Distinguishing between “bad AI writing” and “sophisticated human writing” requires looking at which specific patterns actually drive users away.

The Tics That Kill Engagement: What the Data Reveals

The study found that most AI tells have a negligible impact on performance: in statistics, a correlation coefficient with an absolute value below 0.1 is generally considered insignificant, and the majority of tics fell within that band. However, a few specific habits showed a clear negative relationship with engagement.
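As a sanity check on figures like these, here is a minimal sketch of the underlying computation. The per-URL numbers are hypothetical, invented purely to show the shape of the data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Hypothetical per-URL values: tic density vs. GA4 engagement rate.
tic_density = [2.1, 5.4, 0.9, 7.7, 3.3]       # tics per 1,000 words
engagement  = [0.61, 0.48, 0.66, 0.41, 0.55]  # engaged sessions / sessions

r = pearson_r(tic_density, engagement)
print(f"r = {r:.3f} ({'negligible' if abs(r) < 0.1 else 'potentially meaningful'})")
```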

The “Conclusion” Header Kiss of Death

The single strongest negative correlation in the entire dataset was the use of the word “Conclusion” as a section header, at approximately -0.118 against engagement rate. When readers see a header that explicitly says “Conclusion,” they often read it as a signal that the value has ended. They scroll past the final paragraphs to find a call to action, or simply exit the page immediately.

In AI-generated content, LLMs have a habit of wrapping things up neatly with a summary. These sections often fail to add new information, instead restating what was said in the previous 800 words. Readers are savvy; they recognize the “throat-clearing” nature of these closers and bounce before the session can be counted as engaged.
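If you want to screen a content library for this tic, a short check suffices. This sketch assumes Markdown source, and the variant list is an editorial guess at near-synonyms, not part of the study:

```python
import re

# Matches a Markdown heading whose text is just "Conclusion" or a
# close variant (the variant list is an editorial assumption).
CONCLUSION_HEADER = re.compile(
    r"^#{1,6}\s*(?:conclusion|final thoughts|wrapping up)\s*$",
    re.IGNORECASE | re.MULTILINE,
)

def has_conclusion_header(markdown: str) -> bool:
    return bool(CONCLUSION_HEADER.search(markdown))
```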

The Overuse of “Not Only… But Also”

Another significant performance killer was the repetitive use of “Not only [X], but also [Y]” constructions. While this is a grammatically correct way to add emphasis, LLMs tend to use it as a default sentence structure to sound authoritative. The study found that frequent use of this construction correlated with higher bounce rates.

In one extreme example found during the study, a single blog post used this phrase 12 times. This level of repetition creates a rhythmic monotony that makes the reader’s eyes glaze over. It signals a lack of original thought and suggests that the content is merely shuffling keywords around rather than providing nuanced insights.
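A rough way to catch this during an audit is to count the paired construction and flag overuse. The character gap caps and the threshold of two per post are editorial assumptions, not figures from the study:

```python
import re

# "Not only X, but also Y" with up to ~80 characters between the two
# halves; longer gaps are usually separate sentences.
NOT_ONLY_BUT_ALSO = re.compile(
    r"\bnot only\b.{0,80}?\bbut\b.{0,20}?\balso\b",
    re.IGNORECASE | re.DOTALL,
)

def overuses_not_only(text: str, max_per_post: int = 2) -> bool:
    """Flag posts that lean on the construction more than a couple of times."""
    return len(NOT_ONLY_BUT_ALSO.findall(text)) > max_per_post
```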

Introductory Filler and “The Fast-Paced Landscape”

Phrases like “In today’s fast-paced digital landscape,” “Let’s take a look,” or “In this article, we will explore” were also flagged as engagement drains. This is introductory filler: it takes up space without delivering immediate value. In a world where users scan content to find answers quickly, these 15-to-20-word opening stretches act as hurdles. If the first two sentences of a post are generic AI platitudes, the reader assumes the rest will be equally hollow.
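A crude opening-filler check only needs to look at the first few dozen words. The phrase list and the 40-word window below are illustrative assumptions:

```python
# Stock openers to screen for -- an illustrative list, easily extended.
FILLER_OPENERS = (
    "in today's fast-paced",
    "let's take a look",
    "in this article, we will explore",
)

def has_filler_intro(text: str, window_words: int = 40) -> bool:
    """Check whether the opening stretch leans on stock filler phrases."""
    opening = " ".join(text.split()[:window_words]).lower()
    opening = opening.replace("\u2019", "'")  # normalize curly apostrophes
    return any(phrase in opening for phrase in FILLER_OPENERS)
```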

The Surprising Success of the Em Dash

If you listen to editorial purists, the em dash is the ultimate mark of the “lazy AI writer.” However, the data told a very different story. Despite being the most common “tic” in the dataset, em dashes actually showed a slight positive correlation with engagement rate.

Why would a “tell” that people claim to hate actually keep them on the page? The researchers suggest that the em dash is often a proxy for sentence complexity and depth. Writers—and AI models—that use em dashes are typically attempting to explain nuance, provide context, or add parenthetical detail. These kinds of sentences are more common in long-form, thoughtful content that provides real value to the reader. Short, choppy, “human-sounding” sentences can sometimes feel overly simplistic or thin. The em dash, despite its reputation, seems to align with content that readers find substantive enough to stay and read.

3 Practical Strategies for Content Teams

Based on the findings of this study, digital publishers and SEO teams should shift their focus away from “AI detection” and toward “value preservation.” Here is how you can apply these insights to your editorial workflow.

1. Stop Over-Optimizing for AI Detectors

The fact that *Hamlet* would fail an AI detection test should be a wake-up call for the industry. Google has repeatedly stated that its ranking systems reward high-quality, helpful content, regardless of how it was produced. If you spend your time stripping out em dashes or rewriting every “this” and “that” just to satisfy a third-party detection tool, you may be stripping the nuance out of your writing. Instead of worrying about whether a sentence looks like an LLM wrote it, ask if the sentence is clear, accurate, and helpful.

2. Revolutionize Your Conclusions

Since “Conclusion” headers were the strongest negative signal, it is time to rethink how we end articles. Instead of a generic summary, try these alternatives:

  • The “Next Steps” Header: Give the reader actionable advice on what to do with the information they just consumed.
  • The “Final Verdict” Header: Offer a strong opinion or a definitive takeaway that hasn’t been mentioned yet.
  • The “Practical Checklist” Header: Summarize the post as a functional tool rather than a prose recap.

By avoiding the word “Conclusion,” you prevent the psychological trigger that tells the reader it’s time to leave.

3. Edit for Rhythmic Variety

The problem with “Not only… but also” isn’t the phrase itself, but its repetition. AI tends to find a “groove” and stay in it. When auditing content—whether human or AI—editors should look for “sentence-start” fatigue. If four sentences in a row start with “This” or “Then,” the engagement rate will likely suffer. Variety in sentence length and structure is the hallmark of engaging writing. Force your AI prompts (or your writers) to use varied transitions to keep the reader’s brain active.
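“Sentence-start fatigue” is easy to screen for mechanically. A minimal sketch, using naive sentence splitting (good enough for a rough editorial audit, though it will trip on abbreviations):

```python
import re

def sentence_start_fatigue(text: str, run_length: int = 4) -> bool:
    """Flag runs of consecutive sentences opening with the same word."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    first_words = [s.split()[0].lower().strip('"\u201c\'')
                   for s in sentences if s.split()]
    run = 1
    for prev, cur in zip(first_words, first_words[1:]):
        run = run + 1 if cur == prev else 1
        if run >= run_length:
            return True
    return False

# Example: four "This ..." sentences in a row trip the flag.
print(sentence_start_fatigue(
    "This works. This helps. This scales. This converts."))  # True
```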

Focusing on the Forest, Not the Trees

The most important takeaway from the study is that most of the “AI tics” we obsess over don’t actually matter to the average reader. Users do not leave a page because an author used an em dash; they leave because the content failed to solve their problem or failed to present information in an interesting way.

In the evolving information marketplace, the winners won’t be those who hide their use of AI most effectively. The winners will be the brands that use AI to scale the production of genuinely useful, data-backed, and well-structured content. We should be careful about turning stylistic “hot takes” into editorial law. If the data shows that em dashes don’t hurt and specific headers do, our editorial guidelines should reflect that reality, not the subjective opinions of a loud minority on social media.

Write for the reader, prioritize clarity, and don’t let the fear of “looking like AI” prevent you from publishing high-quality, nuanced work. As this study proves, even Shakespeare might have been accused of using ChatGPT if he were writing blog posts in 2026.
