It's All About the Vibes — Pair Programming with Artificial Friends

- By Conor Hodder

Vibe Coding is the latest buzzword, and let’s be honest: it isn’t ready for solo development. But some of its components have genuinely interesting benefits.

Pair programming was once a core part of Extreme Programming. I hated it. I do my best work alone, in flow. But the downside? No one to catch my mistakes or challenge my direction — until now. With an Agentic AI pair programmer, I get instant feedback, a second set of eyes, and a partner who never gets tired or distracted. The result? Fewer bugs, faster learning, and a lot less wasted time.


Setting Up for Success

  • IDE: I use Cursor for its balance of cost and productivity. There are alternatives, but Cursor’s integration with AI agents is unmatched. I’ve tried VSCode, Windsurf, and a few others, but nothing else has delivered the same experience at the same price point. The key is to pick a tool that doesn’t get in your way and lets you focus on the work, not the setup.
  • Model: Claude 3.7 Sonnet isn’t perfect, but it’s included with Cursor Pro. For offline work, I use Qwen2.5 7B Coder on my MacBook. I’ve experimented with other models — Claude 4.0, GPT-4, and some local LLMs — but the real value comes from how you use them, not which one you pick. Don’t get stuck in analysis paralysis. Pick a model, learn its quirks, and push it hard.
  • MCP Servers: GitHub MCP automates my PRs, commit messages, and reviews so I can focus on building, not busywork. Context7 keeps my agent’s documentation current — this is especially powerful for working with bleeding-edge frameworks or libraries where the docs are always changing.
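
For reference, Cursor registers MCP servers in a `.cursor/mcp.json` file. A minimal sketch of the setup above might look like this (the package names and the token placeholder are my assumptions about a typical install, not copied from my actual config — check each server’s README for the current invocation):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    },
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Once registered, the agent can call these servers’ tools directly from chat — no extra wiring on your side.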

Configuring Cursor with User and Project Rules

If you want your AI agent to deliver real value, you need to give it the right context. Cursor lets you define both User Rules (global, persistent across all projects) and Project Rules (specific to a single repo or codebase). Most people skip this step and then wonder why their agent is clueless — don’t make that mistake.

How to set up User and Project Rules in Cursor

  1. Open Cursor and go to the Command Palette (Cmd+Shift+P).
  2. Search for ‘Edit User Rules’ or ‘Edit Project Rules’.
  3. Write clear, direct instructions.
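
Project Rules can also live as files in your repo: Cursor reads `.cursor/rules/*.mdc`, which is markdown with a small frontmatter header. A minimal sketch (the frontmatter fields and globs here are illustrative — adapt the content to your codebase):

```markdown
---
description: Baseline conventions for this repo
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: true
---

- This is a TypeScript monorepo using Next.js and Prisma.
- All new code must include unit tests.
- Use the shared logging utility instead of console.log.
```

Keeping rules as files means they’re versioned and reviewed like any other code.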


Example User Rule:

Treat me as a senior engineer. Prioritize clarity, brevity, and ruthless honesty in all feedback. Never assume I want comfort—only truth. If I challenge you, I expect you to critically assess your initial implementation and push back if you believe you are correct.

Example Project Rule:

This is a TypeScript monorepo using [Next.js](https://nextjs.org/) and [Prisma](https://www.prisma.io/). Follow our internal style guide ([link](https://your-style-guide-link.com)). All new code must include unit tests and use our logging utility. Jira tickets are tracked at [jira.company.com](https://jira.company.com).
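
A rule like “use our logging utility” works best when the utility itself is small and obvious. A hypothetical minimal sketch of what that utility and its required unit test could look like (the module shape and field names are invented for illustration, not from my actual codebase):

```typescript
// logger.ts — a minimal structured logger (hypothetical example)
type Level = "debug" | "info" | "warn" | "error";

export function formatLog(
  level: Level,
  msg: string,
  ctx: Record<string, unknown> = {}
): string {
  // One JSON object per log line keeps output machine-parseable
  return JSON.stringify({ level, msg, ...ctx });
}

export function log(
  level: Level,
  msg: string,
  ctx?: Record<string, unknown>
): void {
  console.log(formatLog(level, msg, ctx));
}

// logger.test.ts — the kind of unit test the Project Rule demands
// (plain assertion shown; swap in Jest/Vitest as appropriate)
const line = formatLog("info", "user created", { userId: 42 });
const parsed = JSON.parse(line);
if (parsed.level !== "info" || parsed.userId !== 42) {
  throw new Error("formatLog produced unexpected output");
}
```

With a utility this small, the agent can be told “every new module must call `log` instead of `console.log`” and actually comply.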

Why this matters:

  • The agent can’t read your mind. If you don’t tell it what matters, it will default to generic advice and boilerplate code.
  • User Rules set the tone for how the agent interacts with you. If you want brutal honesty and high standards, say so.
  • Project Rules cut down on wasted time by giving the agent the context it needs to make smart decisions — frameworks, conventions, even business goals. These can be committed to your repository to ensure that all team members are using the same baseline.

Pro tip:

Update your rules as your project evolves. If you add a new library, change your testing strategy, or shift business priorities, reflect that in your Project Rules. Treat this as a living document, not a one-off setup.


How to Actually Pair with an AI Agent

  1. Start with a clear ticket or task. If you don’t know what you’re building, neither will your agent. I always write out the requirements, acceptance criteria, and any edge cases before I start. The clearer you are, the better your agent performs.
  2. Let the agent take the first pass — don’t micromanage. This is where most people screw up. They treat the agent like a junior dev who needs hand-holding. Don’t. Give it the problem, let it run, and only step in if it gets stuck or goes off track. You’ll be surprised how often it nails the basics.
  3. Intervene only to correct, clarify, or push for better solutions. If the agent misses something, don’t just fix it — ask why. Was the prompt unclear? Did it lack context? Use every mistake as a chance to improve your process. Sometimes, the agent will suggest something you hadn’t considered. Don’t dismiss it out of hand — challenge your own assumptions.
  4. Always ask: Have we met the requirements? Are the tests sufficient? Is the code readable and well-logged? I have a checklist I run through at the end of every session. If the answer to any of these is “no,” I loop back and fix it. The agent is great at generating code, but it’s your job to ensure it meets your standards. I also ask the agent to review its own work — sometimes it catches things on the second pass that it missed the first time.
  5. Treat the agent like a peer, not a servant. Sometimes it will be right, and sometimes it will be wrong. Treating it just like you would any other peer — challenging it, questioning it, and giving it the tools it needs to defend its opinion is critical to a successful, collaborative approach to development.


Example:

Recently, I had to integrate a new SaaS I had never worked with into our stack. I gave the agent the API docs and a list of requirements, then let it generate the initial implementation. It got 80% of the way there on the first try. I stepped in to clarify some business logic, added a few missing tests, and asked it to refactor for readability. The whole process took less than half the time it would have taken solo, and the code was cleaner.
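
To give a flavour of the kind of code such a session converges on — the SaaS, its payload fields, and the trial-plan edge case below are all hypothetical stand-ins, not the real integration — the agent’s first pass plus a review round might produce a small, testable mapping layer like this:

```typescript
// Hypothetical SaaS webhook payload mapped to an internal domain event.
// All names here are invented for illustration.
interface SaasPayload {
  id: string;
  status: "active" | "cancelled";
  amount_cents?: number; // optional on trial plans
}

interface SubscriptionEvent {
  externalId: string;
  cancelled: boolean;
  amountDollars: number;
}

export function toSubscriptionEvent(p: SaasPayload): SubscriptionEvent {
  if (!p.id) {
    // Edge case surfaced during review: the sandbox API can send empty ids
    throw new Error("payload missing id");
  }
  return {
    externalId: p.id,
    cancelled: p.status === "cancelled",
    // Missing amount means a trial plan; default to 0 rather than NaN
    amountDollars: (p.amount_cents ?? 0) / 100,
  };
}
```

Keeping the mapping pure like this is exactly what made the “add a few missing tests” step cheap: each edge case becomes a one-line assertion.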


What I’ve Learned

  • Agents are only as good as the context you give them. If you’re vague, you’ll get vague results. Be explicit about requirements, constraints, and edge cases. I keep a running list of Project & User Rules that I update as I go.
  • Agents accelerate learning with new tech — if you ask the right questions. I’ve used agents to ramp up on unfamiliar frameworks, debug gnarly issues, and even write migration scripts for legacy systems. The trick is to treat the agent like a collaborator, not a tool. Ask it to explain its reasoning, challenge its assumptions, and push it to justify its choices.
  • Pair programming with AI improves code quality, period. I’ve caught more bugs, written better tests, and shipped faster since I started pairing with agents. The feedback loop is instant, and the quality bar is higher. If you’re not using this, you’re falling behind.

Pitfalls to avoid:

  • Don’t blindly accept the agent’s output. Always review, test, and validate.
  • Don’t get lazy with prompts. The more effort you put in up front, the better the results.
  • Don’t treat the agent as infallible. It’s a partner, not a replacement for your judgment.

The Future

Pairing with AI isn’t the future — it’s the present. But don’t get comfortable. By the end of 2025, we’ll be reviewing agent work, not pairing with it. The pace of change is only accelerating. If you’re not adapting, you’re already behind. My advice: start now, experiment aggressively, and build the muscle before everyone else catches up.

Bottom line:

AI pair programming is a force multiplier. Use it ruthlessly, refine your process, and never settle for average. The winners will be those who adapt fastest and set the highest standards — for themselves and their agents.

About Conor Hodder

Technical Lead at Kablamo