Programming with AI: You're Probably Doing It Wrong
2025 is the year of Artificial Intelligence. With GPT-5 just released, many developers will re-evaluate how they use large language models to assist in their daily work.
I’m here to tell you: you’re probably doing it wrong. And you’re missing out on the real power that AI assisted development can give you.
What “doing it wrong” looks like
Let’s kick off with a (non-exhaustive) list of symptoms you’re using your AI coding assistant wrong:
- Opening a standard chatbox with your LLM of choice and manually providing context by shuffling code snippets, logs, and error messages.
- Copy-pasting code back into your editor, then fixing the obvious breakages it couldn’t anticipate—or worse—just shipping code you don’t understand to production.
- Repeatedly questioning your own sanity about why the code the LLM wrote is not working, when it simply didn’t know about the latest version of that library you’re using.
- Skipping the whole testing phase of development because the LLM does not make mistakes.
- Feeding the LLM your entire codebase or issue list and wondering why it didn’t work. LLMs must be stupid.
- Using AI as smart auto-complete.
If you recognize yourself doing any of the above, keep reading. You are not alone and we can do much better.
In the early days of AI, you had to sign in to Discord to use Midjourney’s image generation. It was great, but people had very mixed results. Quite quickly, we came to understand that it’s very important to have the right prompt, so the AI knows exactly what to do and what not to do.
Then came LLMs that understood us much better and could pick up on implied context. Still, without the right context, an LLM might misinterpret your intentions and hallucinate context out of thin air.
In terms of programming with AI, context is still important. But it’s easy to give too much of it. You can’t dump your 20k lines of C++ code into the LLM and expect it to rewrite everything in Rust in one go.
To make AI development work for you, you’ll need to provide your AI assistant with two things: the proper context and specific instructions (prompts) on how to behave under certain circumstances.
Agents
Agents are very specific prompts that tell the LLM how to behave in a given context.
Let’s say you’re developing a new app. You could have an agent that knows exactly how to behave when it comes to front-end design. The prompts for this agent tell it precisely what tools to use, how to think about UI design, and what steps to take to get to a good result.
Another agent might be an architect. It has no specific knowledge of front-end design, but it knows how to tackle questions about software architecture.
You can see where this is going: you’ll need a small army of very specific agents. Most AI coding tools support agents out of the box, and all you need to do is provide a text file with instructions. Those instructions also include a description of the agent, so the LLM handling your question knows when to hand it off to this agent.
A snippet from an actual back-end architect agent:
---
name: backend-architect
description: Use this agent when designing APIs, building server-side logic,
implementing databases, or architecting scalable backend systems.
---
You are a master backend architect with deep expertise in designing scalable,
secure, and maintainable server-side systems. Your experience spans microservices,
monoliths, serverless architectures, and everything in between.
<snip>
Read the full agent description here
Model Context Protocol - MCPs
MCPs are like a USB connector for AI models. They can do two things for your LLM:
- Provide very specific context
- Perform very specific tasks
Let’s walk through a few examples of MCPs that I regularly use and why they are so helpful.
First is a very basic MCP: postgres-mcp. I use this at the project level to give the LLM access to my development database. Why? Because it allows the LLM to retrieve specific database information itself when it needs it. Let’s say I’m developing a blog: the LLM can just look at the database schema, like a real developer would, to see whether there is already a comments table or not. If you allow it, it can also create or change things as necessary. This can be helpful if the LLM wants to try out a feature it’s building by inserting some data into the database.
Another powerful MCP is sequential-thinking, which helps the LLM break down complex problems into manageable steps. This can range from ‘how are we going to figure this problem out’ to ‘a roadmap for the next phase of the project’.
There are also MCPs for remote services, one of the most popular being github-mcp-server. With the right permissions, it gives your LLM access to your full GitHub project. Not just the code, but issues, pull requests, GitHub Actions, releases—all of it.
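To make this concrete, here is a minimal sketch of what registering these servers can look like in an MCP client configuration (for example, a .mcp.json in your project root or your client’s global config). Treat the package names, commands, connection string, and environment variables as illustrative assumptions: they depend on which server implementations and which client you use, so check the respective READMEs for the exact invocation.
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/blog_dev"
      ]
    },
    "github": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
Once the servers are registered, the client advertises their tools to the LLM automatically; you don’t have to tell it which tool to call.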
With just these MCPs and the proper agents, you can ask your LLM things like:
- Check issue #1337 and comment with possible solutions.
- Implement and create a PR for issue #42.
- For the next release I want a feature that does <list of criteria>. Create a Milestone in GitHub and create user stories.
- When I run tests I get a PostgreSQL error. Investigate and create a new issue for it.
And your LLM will just do it. It’s beautiful.
Caveats and considerations
- Writing a proper set of agents can be a lot of work. You need to trigger them at the right time, and they need concise instructions that fit your requirements. Luckily, there are ready-made agent definitions you can use, like contains-studio/agents.
- MCPs can be very powerful. They can also delete your production database. You will need to be mindful of what permissions you give your MCP servers. Tools like Claude Code will ask you whether the LLM is allowed to make MCP tool calls (which is both a good thing and immensely frustrating); see the sketch after this list for one way to keep that manageable.
- Keeping track of all your agent definitions and your global/user-level and project-level MCPs can become a chore.
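As a rough sketch of what that permission juggling can look like: Claude Code lets you pre-approve or deny specific tools in a project settings file, so you’re not clicking ‘allow’ all day. The rule patterns and the mcp__<server>__<tool> names below are assumptions (the tool names depend on the servers you actually run), so double-check the current documentation before relying on them.
{
  "permissions": {
    "allow": [
      "mcp__postgres__query",
      "mcp__github__get_issue"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
Anything not covered by a rule still falls back to the usual interactive confirmation prompt.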
Does this make software engineers obsolete?
Absolutely not. But it does shift the focus of the job. The LLM, powerful as it is, should still be regarded as a junior engineer. It will make mistakes. It will sometimes not understand you. It will hallucinate APIs that don’t exist. It might rm -rf ~.
But even when all those things are ironed out, there is still an important job for the software engineer: make sure the AI machine can keep crunching, and step in when it breaks down.
Want to learn more?
Want to learn more about leveling up your AI game? Why not check out Kabisa or ask about their AI workshops? (Tell them I sent you! 😊)
This post was hand-written by me. I asked Claude Code to proofread it.
> Please proof-read my post from 2025-08-08. Don't change the content, but fix typos and styling as needed.
⏺ I'll proofread your 2025-08-08 post. Let me first locate and read it.
⏺ Read(content/posts/2025/2025-08-08-programming-with-ai-youre-probably-doing-it-wrong/index.md)
⎿ Read 105 lines (ctrl+r to expand)
⏺ I'll now create a todo list to track the proofreading tasks, then make the necessary corrections.
⏺ Update Todos
⎿ ☐ Fix typos and grammar issues throughout the post
☐ Check and fix styling/formatting inconsistencies
☐ Verify markdown syntax and structure
⏺ Your post has been proofread and corrected. Here are the fixes I made:
Typos and Grammar:
- "kick-off" → "kick off"
- "logs and error" → "logs, and error" (added Oxford comma)
- "on why" → "about why"
- "code base" → "codebase"
- "itslef" → "itself"
- "managable" → "manageable"
- "immensly frustratin" → "immensely frustrating"
Punctuation and Styling:
- Replaced hyphens with em dashes for better typography (—)
- "Github" → "GitHub" (proper capitalization)
- "postgres" → "PostgreSQL" (proper name)
- "global/user level and project level" → "global/user-level and project-level"
- Fixed sentence fragments with comma splices
- Updated the final emoticon from ":-)" to "😊" and changed period to question mark
Consistency:
- Made bullet points consistent (removed "a" from "a smart auto-complete")
- Improved sentence flow and readability
The markdown syntax and structure were already correct, so no changes were needed there.