The landscape of software development and digital marketing has undergone a seismic shift with the advent of Large Language Models (LLMs). We use these tools daily—often heavily—to bridge the gap between human intent and machine execution. In fact, research indicates that tech professionals utilize LLMs at twice the rate of the general population, with many spending more than a full day each week interacting with AI interfaces. However, even the most seasoned tech workers face a recurring frustration: the “drift” that happens when an LLM stops following instructions and starts hallucinating or losing the thread of the project. This is the central challenge of “vibe-coding.”
Vibe-coding is the process of building software by describing what you want in natural language, letting the AI generate the code, and then iteratively refining the output until it matches your “vibe”—or your specific functional intent. While it sounds simple, building a complex SEO tool without losing control of your LLM requires more than just good prompts; it requires a structured environment, an understanding of context windows, and a rigorous troubleshooting methodology.
Choose your vibe-coding environment
The era of copy-pasting snippets from a ChatGPT browser window into a text editor is effectively over for anyone serious about building tools. To vibe-code effectively, you need an integrated environment where the LLM has direct access to your file structure. This allows the AI to “see” the entire project, understand how different files interact, and suggest changes that don’t break existing functionality.
The current gold standard for this workflow is Cursor. Based on VS Code, Cursor allows you to use models like Claude 4.6 Opus or Gemini 3 Pro directly within your coding environment. For many, the journey starts on a free hobby plan, but as you realize the efficiency of having an AI partner that understands your codebase, moving to a Pro tier becomes almost inevitable. However, Cursor is not the only player in the game. Here are the primary environments currently used by professional vibe-coders:
Cursor
This is the most popular choice for a reason. Its interface is intuitive, and it offers features like “Composer,” which can write code across multiple files simultaneously. It is highly customizable and integrates seamlessly with your existing terminal and Git workflows.
Windsurf
The main competitor to Cursor, Windsurf distinguishes itself with its “Flow” feature. It is designed to be more autonomous, capable of running terminal commands to test the code it just wrote and self-correcting based on the error messages it receives. This reduces the “hand-holding” required by the user.
Google Antigravity
A newer entrant that moves away from the traditional file-tree view. Antigravity focuses on a “fleet of agents” approach, where you direct multiple autonomous agents to build, test, and deploy features. It is built for scale and focuses on high-level direction rather than line-by-line editing.
Why prompting alone isn’t enough
Many SEOs approach LLMs with the assumption that a “perfect prompt” will result in a perfect tool. This is a misconception. While prompting is important, the real bottleneck in vibe-coding is the “context window.” This refers to the amount of information the model can hold in its active memory at any given time. While modern models like Gemini 3 Pro boast windows of up to 1 million tokens, the quality of retrieval degrades as that window fills up.
LLMs use attention mechanisms that naturally favor the beginning and end of the provided text, a behavior known as the “lost in the middle” phenomenon. If you stuff your context window with 50,000 lines of code, the model may forget a crucial instruction you gave it ten minutes ago. To prevent this, work in focused stages: break your project into logical chunks and clear the LLM’s memory (for example, by starting a fresh chat) between them. This keeps the model focused on the specific task at hand without being distracted by irrelevant background noise.
Furthermore, you must maintain a “trust but verify” mindset. Even when vibe-coding, you should understand the directional options for your project. If the AI suggests a complex scraping library when a simple API call would suffice, you need the foundational knowledge to steer it back on track. Troubleshooting should always involve asking the model to explain its logic before it executes a fix.
Tutorial: Let’s vibe-code an AI Overview question extraction system
To demonstrate the power of structured vibe-coding, we will walk through the creation of an SEO tool designed to extract the implied questions answered by Google’s AI Overviews (formerly SGE). In the modern SEO environment, ranking often depends on answering the specific questions Google deems relevant enough to highlight in its generative summaries. By building a tool that extracts these questions, you can create content that is perfectly aligned with Google’s current understanding of a topic.
Step 1: Planning and brainstorming
Before touching a code editor, you must define the logic of your system. It is often helpful to use a standard LLM interface like Gemini or ChatGPT to map out the architecture. Start with a high-level description of your goal and the necessary steps. For our AI Overview extractor, the plan looks like this:
- Select a target search query.
- Conduct a search and extract the AI Overview content using a reliable API.
- Pass that content to an LLM to identify the implied questions.
- Save the questions and the source snippets to a permanent log for analysis.
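The four steps above can be sketched as a small pipeline before any real building starts. Everything here is illustrative scaffolding: the function names, the `ai_overview`/`text_blocks` response shape, and the log filename are assumptions for planning purposes, not the final implementation.

```python
import json
from datetime import datetime, timezone


def extract_overview_text(serp_response: dict) -> str:
    """Pull the AI Overview text out of a SERP API response.

    Assumes the response carries an 'ai_overview' key containing
    'text_blocks' with 'snippet' strings (the exact shape varies
    by provider, so verify against real output).
    """
    blocks = serp_response.get("ai_overview", {}).get("text_blocks", [])
    return "\n".join(b.get("snippet", "") for b in blocks if b.get("snippet"))


def extract_questions(overview_text: str, llm) -> list[str]:
    """Ask an LLM (any callable: prompt -> str) for the implied questions."""
    prompt = (
        "List the questions this AI Overview implicitly answers, "
        "one per line:\n\n" + overview_text
    )
    return [q.strip() for q in llm(prompt).splitlines() if q.strip()]


def log_run(query: str, questions: list[str], path: str = "runs.jsonl") -> None:
    """Append one run to a permanent JSONL log for later analysis."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "query": query,
        "questions": questions,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Sketching the pipeline as stubs like this also gives you concrete function names to discuss with the LLM during planning, which reduces ambiguity later.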
During this phase, ask your LLM to be critical. Ask it to suggest the simplest path and warn you about potential pitfalls like bot detection. For instance, rather than trying to build a custom scraper that Google will quickly block, use a service like SerpAPI. It handles the proxy management and DOM parsing for you, providing a clean JSON output of the AI Overview.
Step 2: Setting the groundwork in Cursor
Once you have a plan, open Cursor and set up your project. One of Cursor’s strengths is the ability to toggle between different models. For the initial setup, a reasoning model like Claude 4.6 Opus or Gemini 3 Pro is ideal. Start in “Plan Mode.” This allows the AI to discuss the implementation with you without writing a single line of code yet.
Paste your refined plan into the chat. The AI will likely ask clarifying questions. For example, it might ask if you want to store just the questions or the full context of the AI Overview. In our case, we want the “context snippets”—the specific parts of the AI Overview that answer the questions—so we can see how Google is sourcing its information. Once the discussion is complete, ask the AI to generate a `plan.md` file. This file serves as the “source of truth” for the project, preventing the LLM from drifting as you move into the building phase.
Step 3: The building phase and environment setup
Switch Cursor to “Agent Mode.” This mode allows the AI to create files, run terminal commands, and install dependencies. The first task is to set up a virtual environment (`.venv`). This is a critical step for non-developers; it creates an isolated container for your project so that the libraries you download don’t interfere with other projects on your computer.
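Assuming a Unix-like shell (the agent can run these for you in Cursor’s terminal), the environment setup looks like:

```shell
# Create an isolated environment in a hidden .venv folder
python3 -m venv .venv

# Activate it for this terminal session
# (on Windows, use: .venv\Scripts\activate)
. .venv/bin/activate

# Confirm the isolated interpreter is now the active one
which python
```

With the environment active, `pip install -r requirements.txt` installs dependencies into `.venv` rather than system-wide.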
The AI should generate a `requirements.txt` file containing the necessary libraries:
- `google-search-results` (for SerpAPI)
- `openai` (to access the GPT API)
- `weave` (for logging and tracing)
- `python-dotenv` (to manage secret API keys)
Use the terminal to activate your environment and install these dependencies. Then, create a `.env` file. This is where you will store your SerpAPI and OpenAI keys. Never hard-code these keys directly into your script, as this is a major security risk, especially if you plan to share your code on GitHub.
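A minimal sketch of safe key loading, assuming a `.env` file containing `SERPAPI_KEY=...` and `OPENAI_API_KEY=...` lines; the try/except lets the snippet degrade gracefully if `python-dotenv` isn’t installed yet, and `require_key` is a hypothetical helper name:

```python
import os

try:
    # python-dotenv reads KEY=value lines from .env into os.environ
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass  # fall back to whatever is already set in the environment


def require_key(name: str) -> str:
    """Fetch a secret from the environment, failing loudly if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Add it to your .env file; never hard-code it."
        )
    return value

# Usage: serpapi_key = require_key("SERPAPI_KEY")
```

Failing loudly at startup is deliberate: a missing key should stop the script immediately rather than surface later as a cryptic API error mid-run. Remember to add `.env` to your `.gitignore` before pushing anything to GitHub.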
Handling hiccups and troubleshooting
Vibe-coding is rarely a straight line. You will inevitably encounter errors. In our project, a common issue is the tool failing to find the AI Overview in the search results. This often happens because the structure of search engine results pages (SERPs) changes frequently, or the specific query doesn’t trigger a generative response.
When an error occurs, do not just tell the AI “it’s broken.” Instead, provide context:
- Copy the full error message from the terminal and add it to the chat.
- Provide a screenshot or the raw JSON output from the API to show the AI what it is actually “seeing.”
- Direct the AI to a specific resource, such as the SerpAPI documentation for AI Overviews.
By providing this level of detail, you prevent the LLM from guessing. It’s also wise to instruct the AI: “Review the documentation and suggest a fix, but do not edit any files until I approve the approach.” This keeps you in control and prevents the model from entering a “fix-break-fix” loop that wastes tokens and creates messy code.
Logging and tracing with Weave
A tool is only useful if its outputs are reliable. For SEO work, you need a way to review the questions extracted by the tool and the prompts that generated them. This is where a tool like Weave (from Weights & Biases) becomes invaluable. Weave provides a permanent record of every “trace” or execution of your script.
When you run your extraction tool, Weave logs the input query, the raw AI Overview text, the specific prompt sent to the LLM, and the final list of questions. If you notice that the questions are becoming repetitive or off-topic, you can look back at the traces to see if the problem lies with the source data or the extraction prompt. This level of observability is what separates a “vibe” from a professional-grade SEO tool. It allows you to refine your “vibe” based on data rather than intuition.
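A hedged sketch of what that instrumentation can look like. `weave.init()` and the `@weave.op()` decorator are Weave’s real entry points, but the project name, the extraction function, and its prompt are illustrative; the LLM call is injected as a plain callable so the traced function stays testable without a live API key.

```python
# Tracing sketch: every call to extract_questions is logged to Weave
# with its inputs (query, overview text) and its output (question list).
try:
    import weave
    weave.init("aio-question-extractor")  # project name is an example
    op = weave.op()
except Exception:
    # No-op fallback so the script still runs if Weave is unavailable
    # or not yet authenticated.
    op = lambda f: f


@op
def extract_questions(query: str, overview_text: str, llm) -> list[str]:
    """llm is any callable mapping a prompt string to a response string."""
    prompt = (
        f"For the search query '{query}', list the questions this "
        f"AI Overview implicitly answers, one per line:\n\n{overview_text}"
    )
    return [
        line.strip("- ").strip()
        for line in llm(prompt).splitlines()
        if line.strip()
    ]
```

In the real script, `llm` would wrap the OpenAI client, for example a function that calls `client.chat.completions.create(...)` and returns the message content; Weave then records the prompt and response on every trace, so you can audit exactly why a run produced the questions it did.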
Structure beats vibes
The term “vibe-coding” suggests a relaxed, improvisational approach to development. While the initial stages are indeed more fluid than traditional coding, the successful execution of a project depends on structure. Without a clear plan, a dedicated environment, and a rigorous logging system, you will eventually lose control of the LLM. It will start creating “spaghetti code” that is impossible to debug, or it will hallucinate features that don’t exist.
By following the methodology outlined here—planning in one model, building in another, maintaining a `plan.md` file, and using tracing tools like Weave—you can harness the power of AI to build sophisticated SEO tools that would have previously required a full development team. Vibe-coding is a superpower for the modern marketer, but only if you have the discipline to keep the vibes grounded in a solid architectural framework. Keep the vibes high, but keep your structure even higher.