GEO myths: This article may contain lies

The Historical Precedent for Skepticism in Digital Strategy

In the high-speed world of digital marketing, where acronyms proliferate overnight and best practices shift quarterly, few things are as dangerous as accepting guidance without rigorous scrutiny. This necessity for skepticism is not unique to our era. We need only look back less than two centuries to see how resistance to verified data can cost dearly.

Consider the medical community in the 1840s. Scientists who championed the radical idea that simple hand washing could save lives were often met with ridicule and dismissal. Although the correlation between improved hygiene and reduced hospital death rates had been demonstrated empirically, the underlying mechanism, germ theory, had yet to be established. Because the comprehensive explanation was missing, the adoption of basic sanitary practices stalled for decades, leading to countless preventable fatalities.

History serves as a powerful reminder: what is laughed at today might become the truth of tomorrow, and conversely, following misleading or unproven guidance—even if delivered with confidence—can lead practitioners down fruitless, expensive paths. While adopting poor Generative Engine Optimization (GEO) advice will not result in a literal health crisis, it certainly presents an economic risk. The consequences can include wasted budget, lost market share, and professional stagnation, which, in the volatile digital ecosystem, constitutes a form of “economic death.”

Generative Engine Optimization (GEO) is the emergent field focusing on ensuring content is discoverable and cited by Large Language Models (LLMs) and AI chatbots within search experiences. As this field rapidly matures, it is breeding an environment ripe for speculation masquerading as science. Drawing inspiration from the dangers of unscientific research in traditional SEO, this article provides a crucial framework for evaluating claims in the GEO landscape. We will highlight the psychological traps that make us susceptible to bad advice and apply a powerful critical lens to three of the most pervasive myths currently influencing AI search optimization strategies.

For those navigating the time constraints of this new digital frontier, here is a concise overview of the core takeaways:

  • We often accept flawed GEO and SEO recommendations due to cognitive biases, lack of knowledge (ignorance/amathia), and a propensity for black-and-white thinking.
  • The “Ladder of Misinference” provides a structured tool—ranging from statement to proof—to critically assess the credibility of any advice.
  • To enhance your knowledge, actively seek out dissenting viewpoints, engage in active consumption, delay belief, and maintain caution regarding AI-generated summaries.
  • Currently, there is no validated need for an `llms.txt` file to boost AI citations.
  • You should continue to leverage schema markup due to its fundamental SEO benefits, even if AI chatbots do not demonstrably use it today for grounding.
  • Maintaining genuine content freshness is critical, particularly for time-sensitive queries, as evidence suggests this impacts AI citation rates.

Before diving into the specifics of these optimization myths, we must first understand why our industry is so vulnerable to accepting unproven concepts.

    The Psychological Roots: Why Bad GEO and SEO Advice Takes Hold

    The digital marketing industry, particularly its bleeding edge focused on new AI search interfaces, is characterized by rapid change and a high degree of opacity regarding algorithmic function. This uncertainty creates fertile ground for misinformation. The fundamental reasons we fall for misleading guidance are complex, rooted in human psychology and cognitive shortcuts.

    Ignorance, Stupidity, and Amathia

    The first hurdle is knowledge itself. We are inherently ignorant because the field is new; we simply do not know better *yet*. Stupidity, in this context, is the inability to know better, a neutral descriptor of a current limitation. The most dangerous state, however, is what the ancient Greeks termed *amathia*—voluntary stupidity. This is the refusal to learn or seek out better information. When marketers stubbornly cling to outdated or debunked theories, dismissing new data out of hand, they suffer from amathia. Overcoming this requires humility and a proactive commitment to ongoing education.

    The Pervasiveness of Cognitive Biases

    We are all prone to cognitive biases, which are mental shortcuts designed to simplify decision-making. In the context of consuming research and articles, confirmation bias is perhaps the most destructive force. Confirmation bias dictates that we preferentially seek out, interpret, and recall information that confirms our pre-existing beliefs or hypotheses. If a marketer already believes that blocking AI bots is detrimental, they will rigorously seek out flaws in any research suggesting the opposite, while blindly accepting any anecdotal evidence that supports their position. This bias prevents objective analysis and critical thought necessary for sound GEO strategy.

    The Pitfalls of Black-and-White Thinking

    The digital sphere often defaults to simplistic, binary conclusions: either a strategy works, or it doesn’t. This black-and-white thinking fails to account for the necessary nuance in search algorithms and user behavior. Concepts are rarely absolute; they exist on a spectrum.

    As author Alex Edmans highlights in his work, the world often consists of shades of gray, which can be categorized as:

    • Moderate: A factor’s impact diminishes after a certain threshold. For example, backlinks are crucial, but their marginal value decreases once a site reaches high domain authority.
    • Granular: A strategy works only under specific conditions. For instance, relying on community content platforms like Reddit for AI citations is granular; it’s only relevant if those platforms are consistently cited for a specific set of prompts related to the query.
    • Marbled: A recommendation is highly contextual and depends entirely on the business model. Blocking certain AI bots isn’t universally stupid; for some proprietary data models or specific companies, it may make perfect financial sense.

    The path to becoming a more effective digital strategist begins with the awareness that we are all susceptible to these traps. We must actively seek frameworks that force us out of heuristic shortcuts and into rigorous analysis.

    The Ladder of Misinference: A Framework for Critical Evaluation

    To shield ourselves from misinformation and the endless cycle of speculation that characterizes nascent fields like GEO, we must adopt a structured method for evaluating claims. We can borrow the “Ladder of Misinference,” which outlines the rigorous climb a claim must make to be accepted as proven fact:

    A claim starts as a statement and must ascend through several critical steps to achieve the status of irrefutable proof:

    1. Statement: An assertion or theory (“This new file format will improve AI ranking”).
    2. Fact: A piece of information known to be true (The file is being crawled by Google).
    3. Data: Observable measurements collected from experiments or observation.
    4. Evidence: Data that demonstrates a causal link or strong correlation, suggesting why a phenomenon occurs.
    5. Proof: Irrefutable verification, often provided by official documentation, patents, or legal findings.

    Most GEO advice today struggles to move past the data stage, relying heavily on correlation studies or small, non-replicable experiments.

    Consider the long-standing SEO debate over user signals:

    • Statement: “User behavior (like click-through rate) is an important factor for better organic search performance.” (Many years ago, this was widely dismissed.)
    • Fact: Improved CTR can often precede better search rankings.
    • Data: Many site owners measured better organic performance following A/B tests that improved user engagement, but the mechanism remained unclear.
    • Evidence: Experiments demonstrated causal effects (e.g., increased CTR leading directly to short-term ranking bumps). More recently, leaks confirmed that portions of Google’s 2024 ranking documentation specifically focused on evaluating user signals.
    • Proof: Official, public confirmation—specifically, court documents from Google’s DOJ monopoly trial—verified the centrality of user signals in search ranking algorithms, moving the claim to universal truth.

    The journey from a speculative statement to verifiable proof took many years, confirming the early, laughed-at “jokes” made by figures like Rand Fishkin and Marcus Tandler.

    Practical Steps for Knowledge Acquisition

    Beyond the ladder, we can implement four personal practices to improve our critical consumption of digital advice:

    • Seek Dissenting Viewpoints: True understanding is achieved when you can articulate and successfully argue the opposing view—a concept known as “steelmanning.” By fully grasping the merits of rival theories, you fortify your own position against inevitable scrutiny.
    • Consume with the Intent to Understand: In professional discussions, we often “listen to reply,” meaning we are already formulating our rebuttal instead of actively absorbing the information presented. Shifting to active listening and consumption ensures we genuinely process the content, including its nuances and caveats.
    • Pause Before You Share and Believe: False information spreads exponentially faster than factual corrections. Before amplifying any GEO claim, regardless of the authority of the source, take a moment to locate the underlying data or evidence. The reputation of the messenger does not guarantee the accuracy of the message.
    • Rely Less on AI for Summarization: While tempting, using LLMs to summarize complex research papers introduces significant risks. Studies have shown that prompts asking for brief summaries often increase the likelihood of “hallucinations”—inaccurate or fabricated details. This AI-generated content can lack the substance required to advance a task meaningfully, a phenomenon dubbed “AI workslop.” For strategic decision-making, there is no substitute for reading the full source material.

    The Prime Example: Blinding AI Workslop

    The need for critical consumption is best illustrated by a phenomenon observed in recent months: comprehensive-looking research reports that crumble under objective scrutiny. These documents often masquerade as authoritative analyses of “how AI search really works,” boasting impressive metrics like “weeks of time investment,” analysis of “19 studies and six case studies,” and promises of being “validated, reviewed, and stress-tested.”

    As noted by Edmans, it is not for the authors to declare their findings groundbreaking; that judgment rests with the reader. When reports aggressively promote the conclusiveness of their proof or the novelty of their results, it often suggests that the findings are not strong enough to speak for themselves. In the GEO context, we must be wary of “AI workslop”: content generated—or analyzed through heuristic shortcuts—that *looks* like good work but fails to provide meaningful substance.

    In one prominent example of this “workslop,” scrutiny revealed several glaring issues:

    • The report failed to deliver on its core claim, drawing false correlations between studies that measured disparate phenomena.
    • Reported sample sizes were inaccurate or inconsistent with the cited original sources.
    • Cited research was often misdated (e.g., a GEO study published in 2023 was claimed to be 2024).
    • The analysis claimed that certain optimization features, like schema markup and FAQ blocks, were “confirmed” to significantly improve AI inclusion, yet a review of the cited study showed it made no such causal claims.
    • A crucial metric used was mislabeled as a “correlation coefficient” when it was, in reality, a proprietary weighted score.

    This episode serves as a powerful reminder: quantity of data is no substitute for quality of analysis. If an analysis appears highly convincing on the surface but contains factual inaccuracies upon inspection, it should be disregarded entirely.

    Debunking the Top Three GEO Myths: Claims vs. Reality

    With a clear critical framework established, we can now address three common recommendations widely circulated for improving AI citation rates, separating evidence from speculation.

    Myth 1: The Mandate for `llms.txt`

    The Claim: Implementing an `llms.txt` file—analogous to `robots.txt`—provides AI chatbots with a lightweight, centralized source of important information, making it easier for AI crawlers to efficiently evaluate the domain and thus boost citation rates.

    Reality Check (Ladder of Misinference): The idea of `llms.txt` is currently a statement. Parts of it are factual (e.g., Google and others *are* crawling and indexing these file paths). However, there is zero data, evidence, or proof demonstrating that the presence of an `llms.txt` file positively influences AI inclusion or citation performance.

    The `llms.txt` concept gained traction largely through amplified influencer chatter and repetitive black-and-white debates. The original proposal, dating back to 2024, also included suggestions for serving a clean Markdown version (`.md` appended) of every content page. This proposal, if adopted, would lead to significant unintended negative consequences, including internal competition between URLs, unnecessary duplicate content issues, and a substantial, unwarranted increase in total crawl volume for search engines.

    The only genuinely compelling scenario for this file is in highly specialized cases where a business operates a complex API that specific AI agents could benefit from referencing, far beyond the needs of standard digital publishers.

    Recommendation:

    • Monitor the situation, but take no immediate action.
    • On a quarterly basis, check whether major entities (OpenAI, Anthropic, Google) have officially announced support and provided detailed documentation.
    • Review your log files to track how often AI crawlers access the theoretical `llms.txt` path (which you can do even if the file is not present).
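    That log-file check can be scripted in a few lines. The sketch below is a hedged illustration only: it assumes combined-format access logs, and the paths and bot names are examples, not an exhaustive list.

```python
import re
from collections import Counter

# Count requests to /llms.txt (and a common variant) per user agent in a
# combined-format access log. Paths and bot names are illustrative assumptions.
LLMS_PATHS = ("/llms.txt", "/llms-full.txt")
LOG_LINE = re.compile(r'"(?:GET|HEAD) (\S+) [^"]*" \d{3} \S+ "[^"]*" "([^"]*)"')

def count_llms_hits(log_lines):
    hits = Counter()
    for line in log_lines:
        match = LOG_LINE.search(line)
        if match and match.group(1).split("?")[0] in LLMS_PATHS:
            hits[match.group(2)] += 1  # key results by user-agent string
    return hits

sample = [
    '203.0.113.7 - - [01/Jan/2025:00:00:00 +0000] "GET /llms.txt HTTP/1.1" 404 162 "-" "GPTBot/1.0"',
    '203.0.113.8 - - [01/Jan/2025:00:00:01 +0000] "GET /index.html HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]
print(count_llms_hits(sample))  # only the GPTBot request to /llms.txt is counted
```

    Tracking these counts by user agent over time shows whether AI crawlers are actually requesting the file, before you invest any effort in creating it.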

    Until official evidence or proof emerges, dedicating resources to an `llms.txt` file is an optimization distraction.

    Myth 2: Schema Markup is the Key to AI Citation

    The Claim: Machines thrive on structured data, so making content as explicit and easy as possible through schema markup (e.g., FAQ schema, HowTo schema) guarantees or significantly increases AI citation rates. This claim is often bolstered by unattributed quotes suggesting major search providers, like Microsoft, have confirmed the use of schema for LLMs.

    Reality Check (Ladder of Misinference): This claim relies heavily on correlational data and misleading statements. Correlation studies often show that sites with robust schema also enjoy better AI visibility, but a significant rival theory explains this: sites that implement good schema generally practice high-quality, hygiene-factor SEO, which earns better search rankings. Those rankings, which are often an input into the AI grounding process, are the true driver of visibility, not the schema itself.

    Furthermore, we must distinguish between LLM training and grounding:

    • Training: During the initial training phase, HTML elements are typically stripped, and the text is tokenized. While LLMs learn to recognize and write structured data, the specific markup on your individual page does not necessarily influence the foundational model’s core knowledge base.
    • Grounding: For real-time response generation (grounding), there is currently no verifiable evidence that AI chatbots access schema markup directly. Recent, controlled experiments have shown that even when analyzing an open DOM (Document Object Model), AI tools often fail to correctly reference existing schema or may even hallucinate markup that does not exist on the page.
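    To make the training-time point concrete, here is a deliberately naive sketch of what stripping markup does to a page. Real preprocessing pipelines are far more involved, so treat this purely as an illustration of why page-level markup attributes tend not to survive into training text.

```python
import re

def strip_markup(html):
    # Naive stand-in for training-time preprocessing: drop script blocks,
    # drop tags (including any schema attributes), collapse whitespace.
    text = re.sub(r"<script.*?</script>", " ", html, flags=re.DOTALL)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

snippet = '<div itemscope itemtype="https://schema.org/FAQPage"><p>What is GEO?</p></div>'
print(strip_markup(snippet))  # "What is GEO?" -- the schema attributes are gone
```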

    It is important to remember that all schema is structured data, but not all structured data is schema. LLMs are excellent at reading well-formatted HTML elements such as tables and lists, a capability that can easily be mistaken for a benefit of schema markup.

    Recommendation:

    • Use schema markup for all established rich results (e.g., Review snippets, Recipe, Article, FAQ, etc.).
    • Continue to use all relevant properties within your schema markup, ensuring its validity and accuracy.
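    If you want to sanity-check that validity programmatically, a lightweight audit can be sketched as below. The required-property list is an illustrative assumption (drawn from common schema.org Article properties), not an official validator ruleset, and the sketch assumes one top-level object per JSON-LD block.

```python
import json
import re

# Extract JSON-LD blocks from raw HTML and report missing properties.
# The EXPECTED lists are illustrative assumptions, not official requirements.
JSONLD_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)
EXPECTED = {"Article": ["headline", "datePublished", "dateModified", "author"]}

def audit_jsonld(html):
    issues = []
    for raw in JSONLD_RE.findall(html):
        node = json.loads(raw)  # assumes a single top-level object per block
        for prop in EXPECTED.get(node.get("@type", ""), []):
            if prop not in node:
                issues.append(f"{node.get('@type')}: missing {prop}")
    return issues

page = '''<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article",
 "headline": "GEO myths", "datePublished": "2025-01-01"}
</script>'''
print(audit_jsonld(page))  # flags the absent dateModified and author properties
```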

    Schema markup is a foundational element of sound SEO hygiene. While it may not currently serve as a direct citation booster for AI, its benefits for organic ranking signals and general site quality are undeniable. It is a necessary investment, regardless of the fluctuating state of AI agent adoption.

    Myth 3: The Imperative for Fresh Content

    The Claim: AI chatbots prefer fresh content and newer sources because they provide greater accuracy, particularly for queries where timeliness is a factor.

    Reality Check (Ladder of Misinference): Compared to the two preceding myths, this recommendation stands on a much more solid base of evidence and data.

    The core issue is that many foundational LLMs have knowledge cut-off dates, often stopping at the end of 2022. When users ask questions requiring knowledge of current events, prices, or recent developments, the model must rely on “grounding” via real-time web search. If freshness is deemed relevant to the query, the system actively seeks recent sources.

    Research from multiple reputable organizations (including Ahrefs, Generative Pulse, and Seer Interactive) has observed a positive correlation between content recency and AI citation frequency for relevant prompts. Furthermore, a recent scientific paper provided additional support, though with caveats (the researchers used API results, which differ from the live user interface, and the date injection methods were highly artificial).

    Nevertheless, the consensus is that for time-sensitive queries, freshness is a strong signal that determines whether the AI system initiates a web search and, consequently, which sources it cites.

    Recommendation:

    • Maintain genuine content relevance and accuracy through regular updates, particularly for pages targeting queries where recency is a known factor (e.g., “best of,” news, tutorials involving evolving technology).
    • Ensure date consistency across all relevant data points: the visible on-page updated date, the schema markup date (dateModified), and the sitemap `lastmod` date.
    • Avoid artificially updating content solely by changing the date stamp. Search engines retain past versions of a webpage (reportedly up to 20) and can detect manipulative date changes, which may negatively impact quality evaluation.
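    The date-consistency check above can be automated. This sketch (the URL, dates, and sitemap snippet are invented for illustration) compares a page's declared `dateModified` against the sitemap's `lastmod` entry:

```python
import xml.etree.ElementTree as ET

# Illustrative sitemap; real pipelines would fetch the live file.
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/guide</loc><lastmod>2025-03-01</lastmod></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_lastmod(sitemap_xml):
    # Map each <loc> to its <lastmod> value.
    root = ET.fromstring(sitemap_xml)
    return {
        url.find("sm:loc", NS).text: url.find("sm:lastmod", NS).text
        for url in root.findall("sm:url", NS)
    }

def check_consistency(url, schema_date_modified, sitemap_xml):
    # True when the page's schema dateModified matches the sitemap lastmod.
    return schema_date_modified == sitemap_lastmod(sitemap_xml).get(url)

# A dateModified taken from the page's JSON-LD that disagrees with the sitemap:
print(check_consistency("https://example.com/guide", "2025-02-15", SITEMAP))  # False
```

    Extending the same comparison to the visible on-page date closes the loop across all three data points.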

    Escaping the Vortex of AI Search Misinformation

    The digital publishing ecosystem faces a unique challenge in the age of generative AI. Before LLMs, the volume of content was already overwhelming; now, it is exploding. This has led to a compression culture where we rely on the same tools that generate the content—AI summarization tools—to consume and analyze it.

    This reliance risks creating a dangerous vortex of misinformation. Flawed GEO research, produced quickly and amplified through cognitive biases, feeds back into the training data of the very AI chatbots we are trying to optimize for. We are already observing this feedback loop, where LLMs sometimes answer GEO questions based on their ingested model knowledge, repeating the speculative advice they found online.

    To prevent the shoveling of digital “asbestos” into our industry—misinformation that must eventually be painstakingly removed—we must cultivate intellectual independence. An attention-grabbing, definitive headline should always serve as a red flag, prompting critical inquiry rather than immediate acceptance. Take the time to understand the underlying mechanics of *why* something is supposed to work. Get your hands dirty with real log file analysis and controlled experimentation.

    Never take any claim at face value, regardless of the authority or visibility of the individual promoting it. Authority is not, and never will be, accuracy.

    P.S. This article may contain lies.
