How I’d Start an AI-Assisted Development Project in 2026
A practical 2026 guide to starting an AI-assisted software project — tools, agent orchestration, Git rules, baselining, documentation, and lessons learned.
Introduction
Yes, this post will most likely be irrelevant in a couple of months, but I still want to share my thoughts on how to run AI-assisted development right now. Importantly, this is not a comprehensive guide - I am structuring some of my thoughts and am surely missing many important things. I'll keep posting as I learn more.
What are the tools to use?
I have switched to working almost entirely in the CLI and have nearly stopped using an IDE - it is too slow for this workflow.
I am also a big fan of voice-to-text - it's awesome (I use WisprFlow). Oddly, I don't always use it; I suspect the reason is psychological.
As a terminal, I currently use Warp. I like it, but I had a hard time setting up notifications for when agents finish their work. I got it working in the end, but not the way I wanted: I get a pop-up notification in the top right of the screen, whereas I wanted a badge on the dock icon showing the number of agents that had finished. I never managed to implement that, so I might switch to another terminal later.
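A terminal-agnostic workaround is to wrap long-running agent commands in a small shell function that announces when the command exits. This is my own sketch, not something from Warp: the function name and message text are made up, `osascript` is the standard macOS scripting tool for system notifications, and the `echo` fallback keeps it harmless on other systems.

```shell
# Hypothetical wrapper (my own sketch): run any command, then announce that it
# finished. On macOS, osascript shows a system notification; elsewhere the
# command fails silently and we fall back to a plain echo.
run_with_notify() {
  "$@"
  status=$?
  osascript -e "display notification \"Agent finished (exit $status)\" with title \"Terminal\"" 2>/dev/null \
    || echo "Agent finished (exit $status)"
  return $status
}
```

Usage would be something like `run_with_notify <your agent command>`; the notification fires whether the agent succeeds or fails, and the wrapper preserves the original exit code.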
How to organize the agents?
Assuming no budget restrictions, we want as many agents running in parallel as we can handle without losing control. The key question is how to orchestrate them.
I currently cannot efficiently handle more than 5 agents. I think this is my own limitation, and with a better development process it should be feasible to go up to 8-10.
I set up the agents in such a way that their scope doesn’t overlap. For example, Agent 1 works on UI, Agent 2 works on backend tasks, Agent 3 runs tests, Agent 4 ensures high quality of the content, Agent 5 is used to generate new ideas with me.
A small trick that helps me a lot: I name every Warp tab after the feature, bug, or idea its agent is working on. This keeps me from getting lost among the tabs and lets me navigate them efficiently.
How to work with Git?
I am a bit paranoid about git (maybe because I don't consider myself an absolute expert in it), so I used to git commit / git push everything myself. This was not a great idea - it slows down the process a lot, and the quality of my commit messages deteriorated. So now the agents commit and push, but I keep basic guardrails in CLAUDE.md. Right now they look something like this:
Git Rules
- When code changes are complete, inform the user which files were modified and provide a summary of the change.
- Push new features / bug fixes to git with a comprehensive commit message. Display the commit message to the user as well.
- If you are not sure whether to git push the change, bring it up to the user and explain the issue.
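These instructions only work if the agent follows them. For a hard guardrail on top, one option (my own sketch, not something from my setup) is a git `pre-push` hook that rejects pushes whose latest commit message is too terse to count as comprehensive. The four-word threshold is an arbitrary placeholder.

```shell
# Hypothetical pre-push guardrail (my own sketch): reject pushes whose latest
# commit message has fewer than a handful of words. The logic would live in
# .git/hooks/pre-push, made executable with chmod +x.
check_commit_message() {
  msg="$1"
  words=$(printf '%s' "$msg" | wc -w)
  if [ "$words" -lt 4 ]; then
    echo "pre-push: commit message too terse: '$msg'" >&2
    return 1
  fi
  return 0
}

# In the actual hook file, feed it the latest commit message:
# check_commit_message "$(git log -1 --pretty=%B)" || exit 1
```

A word count is obviously a crude proxy for quality, but it catches the worst offenders ("fix", "wip") without blocking anyone for long.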
What to document?
With AI-assisted development taking over, documentation should be written for agents, not for users. Agents need to understand what exists, how things connect, and what changed - without reading every file. I use the following documentation files, which I recommend setting up from the start:
- ARCHITECTURE.md — System structure, key design decisions, and naming conventions. This is the most important file — it's the map that prevents the agent from reinventing or contradicting your existing patterns.
- DATABASE.md — Schema definitions, column meanings, and relationships. Without this, agents guess column names and get them wrong. If your project has a database, this file pays for itself on the first session.
- CHANGELOG.md — Reverse-chronological log of what changed and why. Agents append to this after every task, creating an audit trail that carries context across sessions.
- FEATURES.md — What the product does today. Prevents agents from building something that already exists or breaking functionality they didn't know about. Potentially this should also document features that didn't fly and got discontinued, and the reasons why.
- SCRIPTS.md — Available CLI scripts with usage examples. Agents will write one-off scripts for tasks that a utility already handles — this file stops that.
- PIPELINE.md — Data flows, processing stages, and integration points. Relevant for any project with background jobs, ETL, or multi-step workflows.
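To make sure these files exist before the first agent session, a tiny bootstrap script can stub out any that are missing. The file names come from the list above; the script itself is my own sketch.

```shell
# Create any missing agent-facing docs with a title stub, so agents never
# reference a file that doesn't exist. Intended to run from the repo root.
for doc in ARCHITECTURE.md DATABASE.md CHANGELOG.md FEATURES.md SCRIPTS.md PIPELINE.md; do
  if [ ! -f "$doc" ]; then
    printf '# %s\n\n(TODO: fill in)\n' "${doc%.md}" > "$doc"
    echo "created $doc"
  fi
done
```

Because the loop skips files that already exist, it is safe to re-run at any time.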
Then I might have a bunch of other files, including various SKILLS.md (e.g. for running specific analytics), QUALITY_CHECKS.md (to ensure high-quality content) or AGENT.md (if I have an agent integrated into the product). But these are more project-specific.
How to run baselining?
I am a big fan of what I call baselining: identifying which resources (including code) are actually used and deleting what's not. With AI-assisted development, I think it's essential to automate baselining. I am not there yet, but I regularly run ad-hoc baselining sessions with agents.
With that, I have the following instructions in CLAUDE.md:
Delete unused or obsolete files when your changes make them irrelevant (refactors, feature removals, etc.), and revert files only when the change is yours or explicitly requested.
It's not working very well yet - I would like the agents to be more proactive about baselining and dead-code removal. So we are working on it.
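As a starting point for a baselining session, a crude script can surface candidates: files in a directory whose names are never mentioned in any other file. This is entirely my own sketch (not the author's tooling), and name-based matching has obvious false positives, so its output is a review list, not a delete list.

```shell
# Hypothetical baselining helper (my own sketch): list files in a directory
# whose basenames never appear in the text of any other file under it -
# candidates to review for deletion.
find_orphans() {
  dir="$1"
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    name=$(basename "$f")
    # grep -rl returns success if any other file mentions this file's name.
    if ! grep -rl --exclude="$name" -F "$name" "$dir" >/dev/null 2>&1; then
      echo "$f"
    fi
  done
}
```

Entry points (a main module, a README) will always show up as "orphans" because nothing references them by name, which is exactly why a human or agent should review the list rather than delete it blindly.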
How to test efficiently?
This is a hard one, and I feel that I am quite far away from cracking it.
This is what I currently have in CLAUDE.md:
Always run relevant tests at the end of your development. If you develop for the frontend, run frontend tests and don’t hesitate to leverage a browser. If you develop the backend, run backend tests. If you develop a feature that impacts both, run all the tests.
I acknowledge that this is not great: I still spend a lot of my time manually testing things, asking the agents to write and run more tests, and, unfortunately, fixing bugs that were not covered by tests.
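One way to make the rule in CLAUDE.md mechanical rather than a judgment call is to derive the required suites from the changed paths. A minimal sketch of my own, assuming the repo splits code under `frontend/` and `backend/` (those directory names are my assumption, not from the post):

```shell
# Decide which test suite(s) a change requires from the list of changed paths.
pick_suites() {
  changed="$1"   # newline-separated paths, e.g. from `git diff --name-only`
  fe=0; be=0
  if printf '%s\n' "$changed" | grep -q '^frontend/'; then fe=1; fi
  if printf '%s\n' "$changed" | grep -q '^backend/'; then be=1; fi
  if [ "$fe" -eq 1 ] && [ "$be" -eq 1 ]; then echo "all"
  elif [ "$fe" -eq 1 ]; then echo "frontend"
  elif [ "$be" -eq 1 ]; then echo "backend"
  else echo "none"
  fi
}
```

An agent (or a CI step) could call this after its changes and run exactly the suites it prints, instead of relying on the model to interpret "relevant tests" consistently.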
Conclusion
I want to emphasize that most of this post may already be irrelevant in a couple of months. We are all learning how to operate in the new reality of AI-assisted coding, and needless to say, things move fast in this space. However, I hope some of the principles will remain:
- As we want to make our development process as efficient and effective as possible, we need to maximize the throughput per minute from the agents;
- We want to keep documentation for agents up-to-date and comprehensive;
- We want the agents to test everything properly and to keep the project, resources and code clean.
The real trick is staying humble while machines are getting fast. I’m not sure what this process will look like in three months — but I’m certain it won’t be boring.
I hope you enjoyed the post - I wrote it myself without using any GenAI (almost 🙂).
Written by Egor Burlakov
Engineering and Science Leader with experience building scalable data infrastructure, data pipelines and science applications. Sharing insights about data tools, architecture patterns, and best practices.