How I Used LaunchThatBot to Build LaunchThatBot
I built the backend and infrastructure first -- Docker, multi-agent orchestration, Convex pipelines. Then I deployed a squad of AI agents that helped me finish building the product they were running on.
There is a moment in every bootstrapped project where you realize the thing you are building could help you build it.
For LaunchThatBot, that moment came about halfway through development. The backend was working. Docker containers were deploying. The multi-agent infrastructure was functional. And I thought: why am I still doing all of this alone?
I was building a platform for deploying and managing AI agent squads. I had agent squads. The obvious move was to use them.
So I did. I deployed a squad of five agents on the infrastructure I had just built, and they helped me finish the product they were running on. This is the story of how that worked.
Phase one: backend and infrastructure first
Before any agent could help me, I needed the foundation they would run on. This was days of solo work -- the unsexy, essential kind.
Docker and container orchestration
Every OpenClaw deployment in LaunchThatBot runs in a Docker container. Getting that right meant building:
- Container templates with sane defaults for networking, security, and resource limits
- A deployment pipeline that could provision, configure, and launch containers on remote VPS providers
- Health checking and restart logic so containers could recover without manual intervention
- Log aggregation so I could actually see what was happening inside each container without SSH
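The restart logic in that list boils down to a policy decision: given a container's reported health and how many times it has already failed, do nothing, restart it, or escalate to a human. Here is a minimal sketch of that decision, with illustrative names and thresholds -- this is not LaunchThatBot's actual code, and in production the status would come from the Docker Engine's health checks rather than a plain object.

```typescript
// Hypothetical health-check policy. In practice `health` would be read
// from `docker inspect` output; here it is just a value.
type Health = "healthy" | "unhealthy" | "starting";

interface ContainerStatus {
  name: string;
  health: Health;
  consecutiveFailures: number;
}

type Action = "none" | "restart" | "alert";

// Restart on failure, but escalate to a human once restarts stop helping.
function decideAction(status: ContainerStatus, maxRestarts = 3): Action {
  if (status.health !== "unhealthy") return "none";
  return status.consecutiveFailures < maxRestarts ? "restart" : "alert";
}
```

The escalation cap is the important part: without it, a container with a genuinely broken config would restart forever and nobody would notice.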
This was the work I described in the founding post. The kind of infrastructure work that is invisible when it works and catastrophic when it does not.
Convex as the operational backbone
Every piece of state in LaunchThatBot flows through Convex. Deployment records, agent configurations, health status, event logs -- all of it lives in Convex tables with real-time subscriptions.
Building this layer meant designing schemas, writing mutations, setting up scheduled functions for polling and cleanup, and making sure the dashboard could reflect the true state of every deployment at any moment. It also meant building the import pipeline so agents could sync their data into Convex from external sources.
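To make the shape of that layer concrete, here is a guess at what a slice of the schema might look like using Convex's standard `defineSchema` / `defineTable` API. The table and field names are my illustration, not LaunchThatBot's real schema:

```typescript
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

// Hypothetical schema slice: one table for deployments, one for event logs.
export default defineSchema({
  deployments: defineTable({
    name: v.string(),
    status: v.union(
      v.literal("provisioning"),
      v.literal("running"),
      v.literal("failed"),
    ),
    lastHealthCheck: v.number(), // epoch ms of the last successful check
  }).index("by_status", ["status"]),

  events: defineTable({
    deploymentId: v.id("deployments"),
    kind: v.string(),
    message: v.string(),
  }).index("by_deployment", ["deploymentId"]),
});
```

Because every dashboard query is a real-time subscription against tables like these, the UI reflects deployment state the moment a mutation writes it.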
The management dashboard
The Next.js dashboard had to exist before agents could be useful. Without it, there was no interface for monitoring what agents were doing, no way to configure squad behavior, and no way to see if the whole system was actually working.
I built the first version of the dashboard as a traditional solo developer: writing components, establishing code patterns, iterating on the UI. It was functional but far from finished.
That is when I realized I had built enough infrastructure to start dogfooding.
Phase two: deploying the squad
I set up five agents, each with a specific role. They ran on the same LaunchThatBot infrastructure I was building for customers -- same Docker containers, same Convex backend, same dashboard for monitoring.
This was not a theoretical exercise. These agents were doing real work on the real codebase.
The personal assistant agent
This was the squad coordinator. Its job was to help me manage the other four agents -- keeping track of what each one was working on, surfacing blockers, and helping me prioritize.
Think of it as a project manager that never sleeps and never forgets context. When I started a work session, the assistant could tell me what had happened since I last checked in: which pull requests had been opened, what the junior agents had produced, what the senior agent had flagged for review.
It also handled the meta-work of squad management. Updating agent configurations, adjusting prompts based on what was and was not working, and keeping a running log of decisions made so I did not lose context across sessions.
Managing a squad of agents is itself a task that benefits from an agent. That is a sentence that sounds recursive until you experience it.
Two junior developer agents
These were the workhorses. Both ran on free-tier models -- no premium API costs. Their job was to grind through the kind of development work that is straightforward but time-consuming:
- Generating boilerplate components from specifications
- Writing initial implementations of well-defined features
- Creating test scaffolding
- Producing first drafts of utility functions and helpers
- Filling in repetitive patterns across the codebase
The key insight with junior agents is scope. They are remarkably productive when given tightly scoped tasks with clear inputs and expected outputs. They struggle when the task is ambiguous or requires architectural judgment.
So I treated them like actual junior developers. Clear tickets. Specific acceptance criteria. Small, mergeable units of work. They would produce code, push it to a branch, and the next agent in the chain would pick it up.
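A "clear ticket" can itself be made mechanical. The sketch below shows one possible shape for a junior-agent ticket and a gate that refuses to dispatch anything underspecified -- the field names are hypothetical, not the platform's actual format:

```typescript
// Hypothetical ticket shape for a junior agent. Illustrative only.
interface AgentTicket {
  title: string;
  acceptanceCriteria: string[]; // what "done" means, checkable
  inputs: string[];             // files or specs the agent may read
  outputs: string[];            // files the agent must produce
  branch: string;               // where the agent pushes its work
}

// Vague tickets go back to the human for scoping instead of to an agent.
function isDispatchable(t: AgentTicket): boolean {
  return (
    t.acceptanceCriteria.length > 0 &&
    t.outputs.length > 0 &&
    t.branch !== ""
  );
}
```

The gate encodes the lesson from this section: a junior agent with no acceptance criteria or no named output is being handed ambiguity, and ambiguity is exactly where cheap models fail.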
Were their outputs perfect? No. But they were a starting point that was faster than writing everything from scratch. And the imperfections were consistent enough that reviewing and fixing their work became a predictable process rather than a surprise every time.
The senior full-stack developer agent
This was the quality gate. It monitored the git branches that the junior agents pushed to, pulled their code, and performed structured reviews.
Its workflow looked like this:
- Watch for new commits from the junior agents
- Pull the branch and analyze the changes
- Run a code audit: type safety, pattern consistency, security concerns, performance implications
- If the changes were clean, suggest the optimal implementation plan for merging -- which files to touch, what order to apply changes, and what to watch out for
- If the changes had issues, flag them with specific feedback and send the analysis back
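The fork at the end of that workflow -- approve with a merge plan, or reject with specific findings -- can be sketched as a small piece of logic. This is an illustration of the shape of the decision, not the senior agent's actual implementation:

```typescript
// Illustrative review verdict logic, assuming findings come back
// from the model tagged with a severity.
interface ReviewFinding {
  severity: "info" | "warn" | "block";
  message: string;
}

type Verdict =
  | { kind: "approve"; mergePlan: string[] }   // ordered files to merge
  | { kind: "reject"; feedback: ReviewFinding[] };

function decideVerdict(
  findings: ReviewFinding[],
  mergePlan: string[],
): Verdict {
  const blockers = findings.filter((f) => f.severity === "block");
  return blockers.length === 0
    ? { kind: "approve", mergePlan }
    : { kind: "reject", feedback: blockers };
}
```

Modeling the verdict as a discriminated union means the rest of the pipeline cannot accidentally merge a rejected branch: the merge plan only exists on the approve case.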
The senior agent did not just say "this is bad." It said "this function is missing error handling for the case where the Convex query returns null, and here is the pattern used elsewhere in the codebase." It had context on the project's conventions because it had reviewed enough of the codebase to understand them.
This agent ran on a more capable model. The cost was higher per query, but it reviewed far less volume than the juniors produced. The economics worked: cheap models for volume, expensive models for judgment.
The marketing agent
The fifth agent handled everything on the go-to-market side:
- Blog articles. Yes, some of the articles on this blog started as drafts from the marketing agent. It understood the product because it had access to the same Convex data that powered the dashboard. It knew what features existed, what was in development, and what problems we were solving.
- Affiliate program content. LaunchThatBot's affiliate program needed landing pages, email sequences, and partner documentation. The marketing agent produced first drafts of all of these.
- Social content and positioning. Messaging for different personas, comparison frameworks, feature announcements -- the kind of content that a solo founder typically puts off because there is always more code to write.
The marketing agent was valuable because it freed me from context-switching. Writing marketing copy after a day of debugging Docker networking is brutal. Having an agent that stays in "marketing mode" continuously meant the go-to-market work kept moving even when I was deep in infrastructure.
What I learned from dogfooding
The squad is more than the sum of its parts
Five agents working independently would have been useful. Five agents working as a squad -- with shared context, coordinated handoffs, and a human (me) making the architectural decisions -- was a fundamentally different experience.
The junior agents produced volume. The senior agent ensured quality. The marketing agent kept the business moving. The personal assistant kept me sane. Each agent amplified the others because they were operating on the same codebase, the same Convex instance, the same shared understanding of the project.
Free models are genuinely useful for the right tasks
There is a tendency to assume that only the most capable models are worth using. The junior agents proved that wrong. For well-scoped, clearly defined tasks, free-tier models produced work that was consistently good enough to be a meaningful accelerator.
The key is "well-scoped." If I gave a junior agent a vague task like "improve the dashboard," the output was useless. If I gave it a specific task like "create a React component that displays a deployment's health status using these specific Convex queries and this design system," the output was a solid starting point.
The review loop matters more than the generation
The most valuable part of the squad was not what the junior agents produced. It was the feedback loop between the juniors and the senior agent. That loop -- generate, review, refine -- is what turned rough drafts into production code.
Without the senior agent, I would have spent all my time reviewing junior output myself. With it, I was reviewing the senior agent's analysis of the junior output, which is a much faster process. The senior agent had already identified the issues and suggested fixes. I just had to decide whether I agreed.
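The generate-review-refine loop itself is simple enough to sketch, with the model calls stubbed out as plain functions. In the real squad these are separate agents passing branches back and forth; this is just the control flow:

```typescript
// Minimal sketch of generate -> review -> refine, with agents stubbed.
type Draft = { code: string; revision: number };

function refineUntilApproved(
  generate: (feedback: string[]) => string, // junior agent, stubbed
  review: (code: string) => string[],       // senior agent: blocking issues
  maxRounds = 3,
): Draft {
  let feedback: string[] = [];
  let code = "";
  for (let revision = 1; revision <= maxRounds; revision++) {
    code = generate(feedback);
    feedback = review(code);
    if (feedback.length === 0) return { code, revision };
  }
  return { code, revision: maxRounds }; // best effort; human takes over
}
```

The `maxRounds` cap matters: if the juniors and the senior disagree forever, the loop hands the draft to the human instead of burning tokens indefinitely.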
You end up building for your own agents
Dogfooding the squad meant I experienced every friction point in the platform firsthand. When an agent's deployment was hard to monitor, I felt it. When the dashboard was missing a feature I needed, I noticed it immediately.
This is the benefit of building a tool and using it simultaneously. Every annoyance becomes a feature request from a real user -- yourself. The roadmap writes itself.
The meta moment
There is something philosophically satisfying about a product that helped build itself. But the practical lesson is more useful than the philosophical one.
The practical lesson is this: if you are building infrastructure for AI agents, you should be running AI agents on that infrastructure as early as possible. Not as a demo. Not as a proof of concept. As actual contributors to the project.
The agents did not build LaunchThatBot. I built LaunchThatBot. But they made it possible for one person to do the work that would normally require a small team -- by handling the volume work, maintaining quality gates, keeping the marketing moving, and managing the coordination overhead.
That is what a squad does. Not replace you. Multiply you.
If you have been thinking about how AI agents could accelerate your own projects, the answer is probably not "one agent doing everything." It is a squad of agents with clear roles, appropriate model tiers, and a review process that catches what the cheaper models miss.
That is what I built. That is how I built it. And that is what LaunchThatBot makes possible for you.